== Alternatives ==
 
EM typically converges to a local optimum, not necessarily the global one, and in general there is no bound on its convergence rate. In high dimensions the optimum it reaches can be arbitrarily poor, and the number of local optima can be exponential. Hence, there is a need for alternative methods with guaranteed learning, especially in the high-dimensional setting. Alternatives to EM with better consistency guarantees exist, termed ''moment-based approaches'' or ''spectral techniques''. Moment-based approaches to learning the parameters of a probabilistic model have recently attracted increasing interest, since under certain conditions they enjoy guarantees such as global convergence, unlike EM, which is often plagued by getting stuck in local optima. Algorithms with learning guarantees can be derived for a number of important models, such as mixture models and HMMs. For these spectral methods, no spurious local optima occur, and the true parameters can be consistently estimated under some regularity conditions.
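
A minimal sketch of this contrast, assuming Python with NumPy, on the toy symmetric mixture 0.5·N(−μ, 1) + 0.5·N(+μ, 1); the model, the helper <code>em_mu</code>, and all parameter values are illustrative, not from the original article. EM started at μ = 0 stalls at that spurious stationary point, while a simple second-moment estimator recovers μ in closed form with no initialization at all. This shows only the basic moment-matching idea, not the tensor/spectral machinery used for general mixtures and HMMs.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from 0.5*N(-mu, 1) + 0.5*N(+mu, 1), mu = 2 (illustrative values).
mu_true, n = 2.0, 10_000
signs = rng.choice([-1.0, 1.0], size=n)
x = signs * mu_true + rng.standard_normal(n)

def em_mu(x, mu0, n_iter=200):
    """EM for the single unknown mu of the mixture 0.5*N(-mu,1) + 0.5*N(+mu,1)."""
    mu = mu0
    for _ in range(n_iter):
        # E-step: responsibility of the +mu component; simplifies to sigmoid(2*mu*x).
        gamma = 1.0 / (1.0 + np.exp(-2.0 * mu * x))
        # M-step: closed-form maximizer of the expected complete-data log-likelihood.
        mu = np.mean((2.0 * gamma - 1.0) * x)
    return mu

# At mu = 0 every responsibility is exactly 1/2, so the update returns 0 again:
# EM is stuck at a stationary point that is not the MLE.
print("EM from mu0 = 0.0:", em_mu(x, 0.0))   # stays at 0.0
print("EM from mu0 = 0.5:", em_mu(x, 0.5))   # converges near 2.0

# Moment-based alternative: E[X^2] = mu^2 + 1, so match the second moment.
# Closed form, initialization-free, and consistent as n grows.
mu_mom = np.sqrt(max(np.mean(x**2) - 1.0, 0.0))
print("Moment estimate  :", mu_mom)
</syntaxhighlight>

The printed values make the initialization dependence concrete: the EM run started at 0 reports 0.0, while the run started at 0.5 and the moment estimate both land near the true value 2.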
    
<small>This page was moved from [[wikipedia:en:Expectation–maximization algorithm]]. Its edit history can be viewed at [[EM算法/edithistory]]</small>
 
[[Category:待整理页面]]
 