==== 稀疏字典学习 Sparse dictionary learning ====
 
:''Main article: [https://en.wikipedia.org/wiki/Sparse_dictionary_learning Sparse dictionary learning]''

Sparse dictionary learning is a feature learning method in which data are represented as a linear combination of [https://en.wikipedia.org/wiki/Basis_function basis functions], and the coefficients are assumed to be sparse. Let x be d-dimensional data and D a d-by-n matrix in which each column represents a basis function, and let r be the coefficients that represent x in terms of D. Mathematically, sparse dictionary learning means solving <math>x \approx Dr</math> where <math>r</math> is sparse. In general, n is assumed to be larger than d to allow for sparse representations.
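
A minimal numerical sketch of this formulation is given below; the dimensions (d = 20, n = 50) and the sparsity level of 3 non-zero coefficients are arbitrary assumptions, and orthogonal matching pursuit from scikit-learn is used as one common way of finding a sparse <math>r</math> with <math>x \approx Dr</math> once a dictionary is fixed.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)

d, n, k = 20, 50, 3              # data dimension, number of atoms, sparsity level
D = rng.normal(size=(d, n))      # d-by-n dictionary, one basis function per column
D /= np.linalg.norm(D, axis=0)   # normalise each atom

# synthesise a signal that really is a sparse combination of k atoms
r_true = np.zeros(n)
r_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
x = D @ r_true

# recover sparse coefficients r such that x ≈ D r
r = orthogonal_mp(D, x, n_nonzero_coefs=k)

print("non-zero coefficients:", np.count_nonzero(r))       # at most k
print("reconstruction error:", np.linalg.norm(x - D @ r))  # close to 0
</syntaxhighlight>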

Learning a dictionary together with sparse representations is [https://en.wikipedia.org/wiki/Strongly_NP-hard strongly NP-hard] and also difficult to solve approximately.<ref>{{cite journal |first=A. M. |last=Tillmann |title=On the Computational Intractability of Exact and Approximate Dictionary Learning |journal=IEEE Signal Processing Letters |volume=22 |issue=1 |year=2015 |pages=45–49 |doi=10.1109/LSP.2014.2345761 |bibcode=2015ISPL...22...45T |arxiv=1405.6664}}</ref> A popular heuristic method for sparse dictionary learning is the [https://en.wikipedia.org/wiki/K-SVD K-SVD] algorithm.
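
In practice such heuristics alternate between sparse-coding the data with the current dictionary and updating the dictionary. The sketch below uses MiniBatchDictionaryLearning from scikit-learn, which implements a related alternating heuristic (scikit-learn does not provide K-SVD itself); the toy data and all parameter values are illustrative assumptions.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)

# toy training data: 500 samples of dimension 20 (in practice, e.g. image patches)
X = rng.normal(size=(500, 20))

# learn an overcomplete dictionary of 50 atoms; each code is limited to
# 5 non-zero coefficients, which is the sparsity constraint on r
learner = MiniBatchDictionaryLearning(
    n_components=50,
    transform_algorithm="omp",
    transform_n_nonzero_coefs=5,
    random_state=0,
)
codes = learner.fit_transform(X)   # sparse coefficients, shape (500, 50)
D = learner.components_            # learned dictionary, shape (50, 20)

print("average non-zeros per code:", np.count_nonzero(codes, axis=1).mean())
print("reconstruction error:", np.linalg.norm(X - codes @ D))
</syntaxhighlight>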

Sparse dictionary learning has been applied in several contexts. In classification, the problem is to determine the class to which previously unseen data belong. Suppose a dictionary has already been built for each class; a new datum is then associated with the class whose dictionary gives the best sparse representation of it. Sparse dictionary learning has also been applied in [https://en.wikipedia.org/wiki/Image_de-noising image de-noising]. The key idea is that a clean image patch can be sparsely represented by an image dictionary, but the noise cannot.<ref>Aharon, M., M. Elad, and A. Bruckstein. 2006. "[http://sites.fas.harvard.edu/~cs278/papers/ksvd.pdf K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation]." IEEE Transactions on Signal Processing 54 (11): 4311–4322.</ref>
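
The classification use can be sketched as follows; the two-class toy data, the helper names and every parameter are hypothetical choices for illustration. One dictionary is fitted per class with the same alternating heuristic as above, and a new sample is assigned to the class whose dictionary reconstructs it with the smallest error from a sparse code.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(1)

def fit_class_dictionary(X, n_atoms=30):
    """Learn one dictionary from the training data of a single class."""
    return MiniBatchDictionaryLearning(
        n_components=n_atoms,
        transform_algorithm="omp",
        transform_n_nonzero_coefs=5,
        random_state=0,
    ).fit(X)

def classify(x, dictionaries):
    """Pick the class whose dictionary yields the smallest sparse-reconstruction error."""
    errors = {}
    for label, dl in dictionaries.items():
        code = dl.transform(x.reshape(1, -1))              # sparse coefficients r
        errors[label] = np.linalg.norm(x - code @ dl.components_)
    return min(errors, key=errors.get)

# toy two-class data: each class lies in its own 5-dimensional subspace of R^20
basis_a = rng.normal(size=(5, 20))
basis_b = rng.normal(size=(5, 20))
dictionaries = {
    "a": fit_class_dictionary(rng.normal(size=(200, 5)) @ basis_a),
    "b": fit_class_dictionary(rng.normal(size=(200, 5)) @ basis_b),
}

# a previously unseen sample generated from class "b"
x_new = rng.normal(size=5) @ basis_b
print(classify(x_new, dictionaries))   # expected: "b"
</syntaxhighlight>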
 
==== 异常检测 Anomaly detection ====
 