[[File:EM Clustering of Old Faithful data.gif|right|frame|EM clustering of Old Faithful eruption data. The random initial model (which, because of the different scales of the axes, appears as two very flat and wide regions) is fit to the observed data. In the first iterations the model changes substantially, but it then converges to the two modes of the geyser. Visualized using ELKI.]]

In [[statistics]], an '''expectation–maximization''' ('''EM''') '''algorithm''' is an [[iterative method]] to find (local) [[maximum likelihood]] or [[maximum a posteriori]] (MAP) estimates of [[parameter]]s in [[statistical model]]s, where the model depends on unobserved [[latent variable]]s. The EM iteration alternates between performing an expectation (E) step, which creates a function for the expectation of the [[Likelihood function#Log-likelihood|log-likelihood]] evaluated using the current estimate for the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found on the ''E'' step. These parameter-estimates are then used to determine the distribution of the latent variables in the next E step.

== History ==
The EM algorithm was explained and given its name in a classic 1977 paper by [[Arthur P. Dempster|Arthur Dempster]], [[Nan Laird]], and [[Donald Rubin]].<ref name="Dempster1977">{{cite journal |last1=Dempster |first1=A.P. |author-link1=Arthur P. Dempster |last2=Laird |first2=N.M. |author-link2=Nan Laird |last3=Rubin |first3=D.B. |author-link3=Donald Rubin |title=Maximum Likelihood from Incomplete Data via the EM Algorithm |journal=[[Journal of the Royal Statistical Society, Series B]] |year=1977 |volume=39 |issue=1 |pages=1–38 |jstor=2984875 |mr=0501537}}</ref> They pointed out that the method had been "proposed many times in special circumstances" by earlier authors. One of the earliest is the gene-counting method for estimating allele frequencies by [[Cedric Smith (statistician)|Cedric Smith]].<ref>{{cite journal |last1=Ceppelini |first1=R.M. |title=The estimation of gene frequencies in a random-mating population |journal=Ann. Hum. Genet. |volume=20 |issue=2 |pages=97–115 |doi=10.1111/j.1469-1809.1955.tb01360.x |pmid=13268982 |year=1955}}</ref> A very detailed treatment of the EM method for exponential families was published by Rolf Sundberg in his thesis and several papers,<ref name="Sundberg1974">{{cite journal |last=Sundberg |first=Rolf |title=Maximum likelihood theory for incomplete data from an exponential family |journal=Scandinavian Journal of Statistics |volume=1 |year=1974 |issue=2 |pages=49–58 |jstor=4615553 |mr=381110}}</ref><ref name="Sundberg1971">Rolf Sundberg. 1971. ''Maximum likelihood theory and applications for distributions generated when observing a function of an exponential family variable''. Dissertation, Institute for Mathematical Statistics, Stockholm University.</ref><ref name="Sundberg1976">{{cite journal |last=Sundberg |first=Rolf |year=1976 |title=An iterative method for solution of the likelihood equations for incomplete data from exponential families |journal=Communications in Statistics – Simulation and Computation |volume=5 |issue=1 |pages=55–64 |doi=10.1080/03610917608812007 |mr=443190}}</ref> following his collaboration with [[Per Martin-Löf]] and [[Anders Martin-Löf]].<ref>See the acknowledgement by Dempster, Laird and Rubin on pages 3, 5 and 11.</ref><ref>G. Kulldorff. 1961. ''Contributions to the theory of estimation from grouped and partially grouped samples''. Almqvist & Wiksell.</ref><ref name="Martin-Löf1963">Anders Martin-Löf. 1963. "Utvärdering av livslängder i subnanosekundsområdet" ("Evaluation of sub-nanosecond lifetimes"). ("Sundberg formula")</ref><ref name="Martin-Löf1966">[[Per Martin-Löf]]. 1966. ''Statistics from the point of view of statistical mechanics''. Lecture notes, Mathematical Institute, Aarhus University. ("Sundberg formula" credited to Anders Martin-Löf).</ref><ref name="Martin-Löf1970">[[Per Martin-Löf]]. 1970. ''Statistika Modeller (Statistical Models): Anteckningar från seminarier läsåret 1969–1970 (Notes from seminars in the academic year 1969–1970), with the assistance of Rolf Sundberg.'' Stockholm University. ("Sundberg formula")</ref> The Dempster–Laird–Rubin paper in 1977 generalized the method and sketched a convergence analysis for a wider class of problems, and it established the EM method as an important tool of statistical analysis.<ref name="Martin-Löf1974a">Martin-Löf, P. The notion of redundancy and its use as a quantitative measure of the deviation between a statistical hypothesis and a set of observational data. With a discussion by F. Abildgård, [[Arthur P. Dempster|A. P. Dempster]], [[D. Basu]], [[D. R. Cox]], [[A. W. F. Edwards]], D. A. Sprott, [[George A. Barnard|G. A. Barnard]], O. Barndorff-Nielsen, J. D. Kalbfleisch and [[Rasch model|G. Rasch]] and a reply by the author. ''Proceedings of Conference on Foundational Questions in Statistical Inference'' (Aarhus, 1973), pp. 1–42. Memoirs, No. 1, Dept. Theoret. Statist., Inst. Math., Univ. Aarhus, Aarhus, 1974.</ref><ref name="Martin-Löf1974b">{{cite journal |last=Martin-Löf |first=Per |title=The notion of redundancy and its use as a quantitative measure of the discrepancy between a statistical hypothesis and a set of observational data |journal=Scand. J. Statist. |volume=1 |year=1974 |issue=1 |pages=3–18}}</ref>

The convergence analysis of the Dempster–Laird–Rubin algorithm was flawed, and a correct convergence analysis was published by [[C. F. Jeff Wu]] in 1983.<ref name="Wu">{{cite journal |first=C. F. Jeff |last=Wu |title=On the Convergence Properties of the EM Algorithm |journal=[[Annals of Statistics]] |volume=11 |issue=1 |date=Mar 1983 |pages=95–103 |jstor=2240463 |doi=10.1214/aos/1176346060 |mr=684867 |doi-access=free}}</ref> Wu's proof established the EM method's convergence outside of the [[exponential family]], as claimed by Dempster–Laird–Rubin.<ref name="Wu" />
       

== Introduction ==
The EM algorithm is used to find (local) [[maximum likelihood]] parameters of a [[statistical model]] in cases where the equations cannot be solved directly.  Typically these models involve [[latent variable]]s in addition to unknown [[parameters]] and known data observations.  That is, either [[missing values]] exist among the data, or the model can be formulated more simply by assuming the existence of further unobserved data points. For example, a [[mixture model]] can be described more simply by assuming that each observed data point has a corresponding unobserved data point, or latent variable, specifying the mixture component to which each data point belongs.
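
As a concrete illustration of this formulation (a generic two-component Gaussian mixture, introduced here only as an example and not tied to any particular data set), the observed-data density of a point <math>x_i</math> sums over the unobserved component label <math>z_i \in \{1, 2\}</math>, while the complete-data density treats <math>z_i</math> as if it had been observed:

: <math>
p(x_i \mid \boldsymbol\theta) = \sum_{k=1}^{2} \pi_k \, \mathcal{N}(x_i ; \mu_k, \sigma_k^2),
\qquad
p(x_i, z_i = k \mid \boldsymbol\theta) = \pi_k \, \mathcal{N}(x_i ; \mu_k, \sigma_k^2),
</math>

with <math>\boldsymbol\theta = (\pi_1, \pi_2, \mu_1, \mu_2, \sigma_1^2, \sigma_2^2)</math>. The complete-data form factorizes over the data points and is much easier to maximize than the observed-data form, which is why introducing <math>z_i</math> simplifies the model.
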
Finding a maximum likelihood solution typically requires taking the [[derivative]]s of the [[likelihood function]] with respect to all the unknown values, the parameters and the latent variables, and simultaneously solving the resulting equations. In statistical models with latent variables, this is usually impossible. Instead, the result is typically a set of interlocking equations in which the solution to the parameters requires the values of the latent variables and vice versa, but substituting one set of equations into the other produces an unsolvable equation.

The EM algorithm proceeds from the observation that there is a way to solve these two sets of equations numerically. One can simply pick arbitrary values for one of the two sets of unknowns, use them to estimate the second set, then use these new values to find a better estimate of the first set, and then keep alternating between the two until the resulting values both converge to fixed points.<ref name="Wu" /> It's not obvious that this will work, but it can be proven that in this context it does, and that the derivative of the likelihood is (arbitrarily close to) zero at that point, which in turn means that the point is either a maximum or a [[saddle point]]. In general, multiple maxima may occur, with no guarantee that the global maximum will be found. Some likelihoods also have [[Mathematical singularity|singularities]] in them, i.e., nonsensical maxima. For example, one of the ''solutions'' that may be found by EM in a mixture model involves setting one of the components to have zero variance and the mean parameter for the same component to be equal to one of the data points.
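
To see why the degenerate solution just mentioned is nonsensical yet attains arbitrarily high likelihood (using the two-component Gaussian mixture sketched above as an illustrative assumption), set <math>\mu_1 = x_1</math>; the first component's contribution to the likelihood of <math>x_1</math> then grows without bound as its variance shrinks:

: <math>
\pi_1 \, \mathcal{N}(x_1 ; \mu_1 = x_1, \sigma_1^2) = \frac{\pi_1}{\sqrt{2\pi}\,\sigma_1} \to \infty \quad \text{as } \sigma_1 \to 0,
</math>

so the likelihood has a singularity there even though the corresponding parameters describe no meaningful cluster.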

== Description ==
Given the [[statistical model]] which generates a set <math>\mathbf{X}</math> of observed data, a set of unobserved latent data or [[missing values]] <math>\mathbf{Z}</math>, and a vector of unknown parameters <math>\boldsymbol\theta</math>, along with a [[likelihood function]] <math>L(\boldsymbol\theta; \mathbf{X}, \mathbf{Z}) = p(\mathbf{X}, \mathbf{Z}\mid\boldsymbol\theta)</math>, the [[maximum likelihood estimate]] (MLE) of the unknown parameters is determined by maximizing the [[marginal likelihood]] of the observed data.
:<math>L(\boldsymbol\theta; \mathbf{X}) = p(\mathbf{X}\mid\boldsymbol\theta) = \int  p(\mathbf{X},\mathbf{Z} \mid \boldsymbol\theta) \, d\mathbf{Z} </math>

However, this quantity is often intractable since <math>\mathbf{Z}</math> is unobserved and the distribution of <math>\mathbf{Z}</math> is unknown before attaining <math>\boldsymbol\theta</math>.

The EM algorithm seeks to find the MLE of the marginal likelihood by iteratively applying these two steps:
:''Expectation step (E step)'': Define <math>Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)})</math> as the [[expected value]] of the log [[likelihood function]] of <math>\boldsymbol\theta</math>, with respect to the current [[conditional probability distribution|conditional distribution]] of <math>\mathbf{Z}</math> given <math>\mathbf{X}</math> and the current estimates of the parameters <math>\boldsymbol\theta^{(t)}</math>:
::<math>Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) = \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}}\left[ \log L (\boldsymbol\theta; \mathbf{X},\mathbf{Z})  \right] \,</math>
 
:''Maximization step (M step)'': Find the parameters that maximize this quantity:

::<math>\boldsymbol\theta^{(t+1)} = \underset{\boldsymbol\theta}{\operatorname{arg\,max}} \ Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) \, </math>
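
As a worked instance of these two steps, consider again a two-component Gaussian mixture (an illustrative assumption, not part of the general definition above). The E step reduces to computing, under the current parameters, the posterior "responsibility" of each component for each point, and with it the expected complete-data log-likelihood:

: <math>
\gamma_{ik}^{(t)} = p(z_i = k \mid x_i, \boldsymbol\theta^{(t)}) = \frac{\pi_k^{(t)} \, \mathcal{N}\bigl(x_i ; \mu_k^{(t)}, (\sigma_k^{(t)})^2\bigr)}{\sum_j \pi_j^{(t)} \, \mathcal{N}\bigl(x_i ; \mu_j^{(t)}, (\sigma_j^{(t)})^2\bigr)},
\qquad
Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) = \sum_i \sum_k \gamma_{ik}^{(t)} \left[ \log \pi_k + \log \mathcal{N}(x_i ; \mu_k, \sigma_k^2) \right],
</math>

and the M step then maximizes this <math>Q</math> in closed form, giving responsibility-weighted updates such as <math>\mu_k^{(t+1)} = \sum_i \gamma_{ik}^{(t)} x_i \big/ \sum_i \gamma_{ik}^{(t)}</math>.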
 
The typical models to which EM is applied use <math>\mathbf{Z}</math> as a latent variable indicating membership in one of a set of groups:

#The observed data points <math>\mathbf{X}</math> may be [[discrete random variable|discrete]] (taking values in a finite or countably infinite set) or [[continuous random variable|continuous]] (taking values in an uncountably infinite set). Associated with each data point may be a vector of observations.
#The [[missing values]] (aka [[latent variables]]) <math>\mathbf{Z}</math> are [[discrete random variable|discrete]], drawn from a fixed number of values, and with one latent variable per observed unit.
#The parameters are continuous, and are of two kinds: parameters that are associated with all data points, and those associated with a specific value of a latent variable (i.e., associated with all data points whose corresponding latent variable has that value).

However, it is possible to apply EM to other sorts of models.

The motive is as follows.  If the value of the parameters <math>\boldsymbol\theta</math> is known, usually the value of the latent variables <math>\mathbf{Z}</math> can be found by maximizing the log-likelihood over all possible values of <math>\mathbf{Z}</math>, either simply by iterating over <math>\mathbf{Z}</math> or through an algorithm such as the [[Baum–Welch algorithm]] for [[hidden Markov model]]s.  Conversely, if we know the value of the latent variables <math>\mathbf{Z}</math>, we can find an estimate of the parameters <math>\boldsymbol\theta</math> fairly easily, typically by simply grouping the observed data points according to the value of the associated latent variable and averaging the values, or some function of the values, of the points in each group.  This suggests an iterative algorithm, in the case where both <math>\boldsymbol\theta</math> and <math>\mathbf{Z}</math> are unknown:
#First, initialize the parameters <math>\boldsymbol\theta</math> to some random values.
#Compute the probability of each possible value of <math>\mathbf{Z}</math>, given <math>\boldsymbol\theta</math>.
#Then, use the just-computed values of <math>\mathbf{Z}</math> to compute a better estimate for the parameters <math>\boldsymbol\theta</math>.
#Iterate steps 2 and 3 until convergence.

The algorithm as just described monotonically approaches a local minimum of the cost function.
      
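
As a concrete illustration of steps 1–4 above, the following minimal sketch (Python with NumPy, assuming a one-dimensional two-component Gaussian mixture; the function name <code>em_gmm_1d</code> and all implementation details are choices made for this example rather than part of the algorithm's definition) returns the parameter estimates together with the observed-data log-likelihood recorded at each iteration:

<syntaxhighlight lang="python">
import numpy as np

def em_gmm_1d(x, n_iter=50, rng=None):
    """EM for a two-component 1-D Gaussian mixture; illustrative sketch only."""
    rng = np.random.default_rng(rng)
    # Step 1: initialize the parameters theta = (weights, means, variances) randomly.
    w = np.array([0.5, 0.5])
    mu = rng.choice(x, size=2, replace=False)
    var = np.array([x.var(), x.var()])
    log_lik = []
    for _ in range(n_iter):
        # E step (step 2): posterior probability of each component for every point.
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        joint = w * dens                      # shape (n, 2)
        marginal = joint.sum(axis=1)          # observed-data likelihood of each point
        resp = joint / marginal[:, None]      # responsibilities gamma_ik
        log_lik.append(np.log(marginal).sum())
        # M step (step 3): responsibility-weighted re-estimation of the parameters.
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        # Step 4 corresponds to the loop itself; in practice one would also stop
        # early once the log-likelihood no longer improves noticeably.
    return w, mu, var, np.array(log_lik)

# Usage example on synthetic data drawn from two well-separated Gaussians.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])
w, mu, var, log_lik = em_gmm_1d(x, n_iter=100, rng=0)
print(w, mu, var)
</syntaxhighlight>

On such data the recorded log-likelihood typically increases at every iteration and the estimated means land near −2 and 3; in line with the text above, a poor initialization can instead lead the same code to a local maximum.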
    
== Properties ==
Speaking of an expectation (E) step is a bit of a misnomer. What are calculated in the first step are the fixed, data-dependent parameters of the function ''Q''. Once the parameters of ''Q'' are known, it is fully determined and is maximized in the second (M) step of an EM algorithm.

Although an EM iteration does increase the observed data (i.e., marginal) likelihood function, no guarantee exists that the sequence converges to a maximum likelihood estimator. For multimodal distributions, this means that an EM algorithm may converge to a local maximum of the observed data likelihood function, depending on starting values. A variety of heuristic or metaheuristic approaches exist to escape a local maximum, such as random-restart hill climbing (starting with several different random initial estimates ''θ''<sup>(''t'')</sup>), or applying simulated annealing methods.
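
A minimal sketch of the random-restart idea, reusing the illustrative <code>em_gmm_1d</code> function from the Description section (so its assumptions carry over): run EM from several random initial estimates and keep the fit whose final observed-data log-likelihood is highest.

<syntaxhighlight lang="python">
def em_with_restarts(x, n_restarts=10, n_iter=100):
    """Run EM from several random initializations; keep the best local maximum found."""
    best = None
    for seed in range(n_restarts):
        w, mu, var, log_lik = em_gmm_1d(x, n_iter=n_iter, rng=seed)  # sketch defined above
        if best is None or log_lik[-1] > best[3][-1]:
            best = (w, mu, var, log_lik)
    return best  # the restart with the largest likelihood; still not guaranteed to be global
</syntaxhighlight>
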
EM is especially useful when the likelihood is an exponential family: the E step becomes the sum of expectations of sufficient statistics, and the M step involves maximizing a linear function. In such a case, it is usually possible to derive closed-form expression updates for each step, using the Sundberg formula (published by Rolf Sundberg using unpublished results of Per Martin-Löf and Anders Martin-Löf).
      
当似然是指数族时,EM 尤其有用:E 步成为充分统计量的期望总和,而 M 步涉及最大化线性函数。 在这种情况下,通常可以使用 Sundberg 公式(由 Rolf Sundberg 发布,使用 Per Martin-Löf 和 Anders Martin-Löf 的未发表结果)为每个步骤导出封闭形式的表达式更新。
 
当似然是指数族时,EM 尤其有用:E 步成为充分统计量的期望总和,而 M 步涉及最大化线性函数。 在这种情况下,通常可以使用 Sundberg 公式(由 Rolf Sundberg 发布,使用 Per Martin-Löf 和 Anders Martin-Löf 的未发表结果)为每个步骤导出封闭形式的表达式更新。
   −
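
To make this concrete, in generic exponential-family notation (an assumption of this illustration rather than notation used elsewhere in the article), write the complete-data likelihood as <math>p(\mathbf{X},\mathbf{Z}\mid\boldsymbol\theta) = h(\mathbf{X},\mathbf{Z}) \exp\bigl(\boldsymbol\theta^{\mathsf T} T(\mathbf{X},\mathbf{Z}) - A(\boldsymbol\theta)\bigr)</math> with sufficient statistic <math>T</math>. Then, up to a term that does not depend on <math>\boldsymbol\theta</math>,

: <math>
Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) = \boldsymbol\theta^{\mathsf T} \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}}\left[ T(\mathbf{X},\mathbf{Z}) \right] - A(\boldsymbol\theta) + \text{constant},
</math>

so the E step only needs the expected sufficient statistics, and the M step solves <math>\nabla A(\boldsymbol\theta) = \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}}\left[ T(\mathbf{X},\mathbf{Z}) \right]</math>, the usual exponential-family likelihood equation with expected statistics in place of observed ones.
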
The EM method was modified to compute maximum a posteriori (MAP) estimates for Bayesian inference in the original paper by Dempster, Laird, and Rubin.

Other methods exist to find maximum likelihood estimates, such as gradient descent, conjugate gradient, or variants of the Gauss–Newton algorithm. Unlike EM, such methods typically require the evaluation of first and/or second derivatives of the likelihood function.
      
存在其他方法来寻找最大似然估计,例如梯度下降、共轭梯度或高斯-牛顿算法的变体。 与 EM 不同,此类方法通常需要评估似然函数的一阶和/或二阶导数。
 
存在其他方法来寻找最大似然估计,例如梯度下降、共轭梯度或高斯-牛顿算法的变体。 与 EM 不同,此类方法通常需要评估似然函数的一阶和/或二阶导数。
 +
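
As a contrast with EM, here is a minimal sketch of such a derivative-based approach for the same illustrative one-dimensional mixture, using SciPy's general-purpose optimizer (the parameterization below, chosen to keep the weight in (0, 1) and the standard deviations positive, is an assumption of this example):

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, x):
    """Negative observed-data log-likelihood of a two-component 1-D Gaussian mixture."""
    logit_w, mu1, mu2, log_s1, log_s2 = params
    w = 1.0 / (1.0 + np.exp(-logit_w))        # mixing weight, mapped into (0, 1)
    s1, s2 = np.exp(log_s1), np.exp(log_s2)   # standard deviations, kept positive
    d1 = np.exp(-0.5 * ((x - mu1) / s1) ** 2) / (np.sqrt(2 * np.pi) * s1)
    d2 = np.exp(-0.5 * ((x - mu2) / s2) ** 2) / (np.sqrt(2 * np.pi) * s2)
    return -np.sum(np.log(w * d1 + (1 - w) * d2))

# Quasi-Newton optimization of the same data array x used in the EM sketch above;
# the gradient of the likelihood is approximated numerically here.
res = minimize(neg_log_lik, x0=[0.0, -1.0, 1.0, 0.0, 0.0], args=(x,), method="BFGS")
</syntaxhighlight>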
    
== Proof of correctness ==
Expectation-maximization works to improve <math>Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)})</math> rather than directly improving <math>\log p(\mathbf{X}\mid\boldsymbol\theta)</math>. Here it is shown that improvements to the former imply improvements to the latter.<ref name="Little1987">{{cite book |last1=Little |first1=Roderick J.A. |last2=Rubin |first2=Donald B. |author2-link=Donald Rubin |title=Statistical Analysis with Missing Data |url=https://archive.org/details/statisticalanaly00litt |url-access=limited |series=Wiley Series in Probability and Mathematical Statistics |year=1987 |publisher=John Wiley & Sons |location=New York |isbn=978-0-471-80254-9 |pages=[https://archive.org/details/statisticalanaly00litt/page/n145 134]–136}}</ref>

For any <math>\mathbf{Z}</math> with non-zero probability <math>p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta)</math>, we can write
: <math>
\log p(\mathbf{X}\mid\boldsymbol\theta) = \log p(\mathbf{X},\mathbf{Z}\mid\boldsymbol\theta) - \log p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta).
</math>

We take the expectation over possible values of the unknown data <math>\mathbf{Z}</math> under the current parameter estimate <math>\boldsymbol\theta^{(t)}</math> by multiplying both sides by <math>p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)})</math> and summing (or integrating) over <math>\mathbf{Z}</math>. The left-hand side is the expectation of a constant, so we get:
: <math>
\begin{align}
\log p(\mathbf{X}\mid\boldsymbol\theta) &
= \sum_{\mathbf{Z}} p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}) \log p(\mathbf{X},\mathbf{Z}\mid\boldsymbol\theta)
- \sum_{\mathbf{Z}} p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}) \log p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta) \\
& = Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) + H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}),
\end{align}
</math>

where <math>H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)})</math> is defined by the negated sum it is replacing. This last equation holds for every value of <math>\boldsymbol\theta</math> including <math>\boldsymbol\theta = \boldsymbol\theta^{(t)}</math>,
: <math>
\log p(\mathbf{X}\mid\boldsymbol\theta^{(t)})
= Q(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}) + H(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}),
</math>

and subtracting this last equation from the previous equation gives
: <math>
\log p(\mathbf{X}\mid\boldsymbol\theta) - \log p(\mathbf{X}\mid\boldsymbol\theta^{(t)})
= Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) - Q(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)})
+ H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) - H(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}),
</math>

However, Gibbs' inequality tells us that <math>H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) \ge H(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)})</math>, so we can conclude that
: <math>
\log p(\mathbf{X}\mid\boldsymbol\theta) - \log p(\mathbf{X}\mid\boldsymbol\theta^{(t)})
\ge Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) - Q(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}).
</math>

In words, choosing <math>\boldsymbol\theta</math> to improve <math>Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)})</math> causes <math>\log p(\mathbf{X}\mid\boldsymbol\theta)</math> to improve at least as much.
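
A quick numerical illustration of this monotonicity guarantee, reusing the illustrative <code>em_gmm_1d</code> sketch and the data array <code>x</code> from the Description section (an assumption of this example, not part of the proof): the observed-data log-likelihood recorded at each iteration should never decrease, up to floating-point noise.

<syntaxhighlight lang="python">
_, _, _, log_lik = em_gmm_1d(x, n_iter=200, rng=1)
# Every EM iteration should improve (or at worst preserve) the log-likelihood.
assert np.all(np.diff(log_lik) >= -1e-9), "log-likelihood decreased between EM iterations"
</syntaxhighlight>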
     