==== Feature learning ====

:''Main article: [https://en.wikipedia.org/wiki/Representation_learning Representation learning]''

Several learning algorithms, mostly [https://en.wikipedia.org/wiki/Unsupervised_learning unsupervised learning] algorithms, aim at discovering better representations of the inputs provided during training.<ref name="pami">{{cite journal |author1=Y. Bengio |author2=A. Courville |author3=P. Vincent |title=Representation Learning: A Review and New Perspectives |journal=IEEE Transactions on Pattern Analysis and Machine Intelligence |year=2013 |doi=10.1109/tpami.2013.50 |pmid=23787338 |volume=35 |issue=8 |pages=1798–1828 |arxiv=1206.5538}}</ref> Classic examples include [https://en.wikipedia.org/wiki/Principal_component_analysis principal component analysis] and [https://en.wikipedia.org/wiki/Cluster_analysis cluster analysis]. Feature learning algorithms, also called representation learning algorithms, often attempt to preserve the information in their input while also transforming it into a useful form, frequently as a pre-processing step before performing classification or prediction. This technique allows reconstruction of the inputs coming from the unknown data-generating distribution, while not being necessarily faithful to configurations that are implausible under that distribution. This replaces manual [https://en.wikipedia.org/wiki/Feature_engineering feature engineering], and allows a machine to both learn the features and use them to perform a specific task.
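As a concrete illustration of learning a representation that preserves the input's information and permits reconstruction, the sketch below runs principal component analysis on synthetic data; the dataset, dimensions, and error tolerance are illustrative assumptions, not part of the article:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic inputs that truly live near a 2-D subspace of R^5.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 5))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 5))

# PCA via SVD of the centered data: the top-k right singular
# vectors span the learned low-dimensional representation.
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 2
codes = (X - mean) @ Vt[:k].T       # learned 2-D features
X_hat = codes @ Vt[:k] + mean       # reconstruction from the representation

# Most of the input information is preserved by the 2-D code,
# so the relative reconstruction error is small.
err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(err < 0.05)
```

A classifier or regressor can then be trained on `codes` instead of the raw inputs, which is the pre-processing use described above.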
[https://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction#Manifold_learning_algorithms Manifold learning] algorithms attempt to do so under the constraint that the learned representation is low-dimensional. [https://en.wikipedia.org/wiki/Neural_coding#Sparse_coding Sparse coding] algorithms attempt to do so under the constraint that the learned representation is sparse, meaning that it has many zeros. [https://en.wikipedia.org/wiki/Multilinear_subspace_learning Multilinear subspace learning] algorithms aim to learn low-dimensional representations directly from [https://en.wikipedia.org/wiki/Tensor tensor] representations of multidimensional data, without reshaping them into (high-dimensional) vectors.<ref>{{cite journal |first1=Haiping |last1=Lu |first2=K.N. |last2=Plataniotis |first3=A.N. |last3=Venetsanopoulos |url=http://www.dsp.utoronto.ca/~haiping/Publication/SurveyMSL_PR2011.pdf |title=A Survey of Multilinear Subspace Learning for Tensor Data |journal=Pattern Recognition |volume=44 |number=7 |pages=1540–1551 |year=2011}}</ref> Deep learning algorithms discover multiple levels of representation, or a hierarchy of features, with higher-level, more abstract features defined in terms of (or generating) lower-level features. It has been argued that an intelligent machine is one that learns a representation that disentangles the underlying factors of variation that explain the observed data.<ref>{{cite book |title=Learning Deep Architectures for AI |author=Yoshua Bengio |publisher=Now Publishers Inc. |year=2009 |isbn=978-1-60198-294-0 |pages=1–3 |url=https://books.google.com/books?id=cq5ewg7FniMC&pg=PA3}}</ref>
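The sparsity constraint can be illustrated with a minimal ISTA (iterative soft-thresholding) sketch that infers a sparse code against a fixed random dictionary; the dictionary, signal, and parameters are illustrative assumptions, and a full sparse coding algorithm would also learn the dictionary:

```python
import numpy as np

rng = np.random.default_rng(1)
# Fixed dictionary of 20 unit-norm atoms in R^10 (overcomplete).
D = rng.normal(size=(10, 20))
D /= np.linalg.norm(D, axis=0)

# A signal that is truly a combination of only 3 atoms.
true_code = np.zeros(20)
true_code[[2, 7, 11]] = [1.5, -2.0, 1.0]
x = D @ true_code

# ISTA: a gradient step on the reconstruction error, then
# soft-thresholding, which drives most coefficients exactly to zero.
lam = 0.1
step = 1.0 / np.linalg.norm(D, 2) ** 2
z = np.zeros(20)
for _ in range(500):
    z = z + step * D.T @ (x - D @ z)
    z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

# The learned code is sparse yet still reconstructs x well.
print(np.count_nonzero(z), np.linalg.norm(x - D @ z) / np.linalg.norm(x))
```

The soft-thresholding step is what enforces the "many zeros" constraint mentioned above: coefficients whose magnitude falls below the threshold are set exactly to zero on every iteration.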
     
463

个编辑

导航菜单