  
 
== History ==

The EM algorithm was explained and given its name in a classic 1977 paper by [[Arthur P. Dempster|Arthur Dempster]], [[Nan Laird]], and [[Donald Rubin]].<ref name="Dempster1977">
{{cite journal
|last1=Dempster |first1=A.P. |author-link1=Arthur P. Dempster
|last2=Laird |first2=N.M. |author-link2=Nan Laird
|last3=Rubin |first3=D.B. |author-link3=Donald Rubin
|title=Maximum Likelihood from Incomplete Data via the EM Algorithm
|journal=[[Journal of the Royal Statistical Society, Series B]]
|year=1977 |volume=39 |issue=1 |pages=1–38
|jstor=2984875 |mr=0501537
}}
</ref> They pointed out that the method had been "proposed many times in special circumstances" by earlier authors. One of the earliest is the gene-counting method for estimating allele frequencies by [[Cedric Smith (statistician)|Cedric Smith]].<ref>{{cite journal |last1=Ceppelini |first1=R.M. |title=The estimation of gene frequencies in a random-mating population |journal=Ann. Hum. Genet. |volume=20 |issue=2 |pages=97–115 |doi=10.1111/j.1469-1809.1955.tb01360.x |pmid=13268982 |year=1955 |s2cid=38625779 }}</ref> A very detailed treatment of the EM method for exponential families was published by Rolf Sundberg in his thesis and several papers<ref name="Sundberg1974">{{cite journal
|last=Sundberg |first=Rolf
|title=Maximum likelihood theory for incomplete data from an exponential family
|journal=Scandinavian Journal of Statistics
|volume=1 |year=1974 |issue=2 |pages=49–58
|jstor=4615553 |mr=381110
}}</ref><ref name="Sundberg1971">
Rolf Sundberg. 1971. ''Maximum likelihood theory and applications for distributions generated when observing a function of an exponential family variable''. Dissertation, Institute for Mathematical Statistics, Stockholm University.</ref><ref name="Sundberg1976">{{cite journal
|last=Sundberg |first=Rolf
|year=1976
|title=An iterative method for solution of the likelihood equations for incomplete data from exponential families
|journal=Communications in Statistics – Simulation and Computation
|volume=5 |issue=1 |pages=55–64
|doi=10.1080/03610917608812007 |mr=443190
}}</ref> following his collaboration with [[Per Martin-Löf]] and [[Anders Martin-Löf]].<ref>See the acknowledgement by Dempster, Laird and Rubin on pages 3, 5 and 11.</ref><ref>G. Kulldorff. 1961. ''Contributions to the theory of estimation from grouped and partially grouped samples''. Almqvist & Wiksell.</ref><ref name="Martin-Löf1963">Anders Martin-Löf. 1963. "Utvärdering av livslängder i subnanosekundsområdet" ("Evaluation of sub-nanosecond lifetimes"). ("Sundberg formula")</ref><ref name="Martin-Löf1966">[[Per Martin-Löf]]. 1966. ''Statistics from the point of view of statistical mechanics''. Lecture notes, Mathematical Institute, Aarhus University. ("Sundberg formula" credited to Anders Martin-Löf).</ref><ref name="Martin-Löf1970">[[Per Martin-Löf]]. 1970. ''Statistika Modeller (Statistical Models): Anteckningar från seminarier läsåret 1969–1970 (Notes from seminars in the academic year 1969–1970), with the assistance of Rolf Sundberg.'' Stockholm University. ("Sundberg formula")</ref><ref name="Martin-Löf1974a">Martin-Löf, P. The notion of redundancy and its use as a quantitative measure of the deviation between a statistical hypothesis and a set of observational data. With a discussion by F. Abildgård, [[Arthur P. Dempster|A. P. Dempster]], [[D. Basu]], [[D. R. Cox]], [[A. W. F. Edwards]], D. A. Sprott, [[George A. Barnard|G. A. Barnard]], O. Barndorff-Nielsen, J. D. Kalbfleisch and [[Rasch model|G. Rasch]] and a reply by the author. ''Proceedings of Conference on Foundational Questions in Statistical Inference'' (Aarhus, 1973), pp. 1–42. Memoirs, No. 1, Dept. Theoret. Statist., Inst. Math., Univ. Aarhus, Aarhus, 1974.</ref><ref name="Martin-Löf1974b">{{cite journal |last=Martin-Löf |first=Per |title=The notion of redundancy and its use as a quantitative measure of the discrepancy between a statistical hypothesis and a set of observational data |journal=Scand. J. Statist. |volume=1 |year=1974 |issue=1 |pages=3–18 }}</ref> The Dempster–Laird–Rubin paper of 1977 generalized the method, sketched a convergence analysis for a wider class of problems, and established EM as an important tool of statistical analysis.

The convergence analysis of the Dempster–Laird–Rubin algorithm was flawed; a correct convergence analysis was published by [[C. F. Jeff Wu]] in 1983.<ref name="Wu">
{{cite journal
|first=C. F. Jeff |last=Wu
|title=On the Convergence Properties of the EM Algorithm
|journal=[[Annals of Statistics]]
|volume=11 |issue=1 |date=Mar 1983 |pages=95–103
|jstor=2240463 |doi=10.1214/aos/1176346060 |mr=684867 |doi-access=free
}}</ref> Wu's proof established the EM method's convergence outside of the [[exponential family]], as claimed by Dempster–Laird–Rubin.<ref name="Wu" />
== Introduction ==

The EM algorithm is used to find (local) [[maximum likelihood]] parameters of a [[statistical model]] in cases where the equations cannot be solved directly. Typically these models involve [[latent variable]]s in addition to unknown [[parameters]] and known data observations. That is, either [[missing values]] exist among the data, or the model can be formulated more simply by assuming the existence of further unobserved data points. For example, a [[mixture model]] can be described more simply by assuming that each observed data point has a corresponding unobserved data point, or latent variable, specifying the mixture component to which it belongs.

Finding a maximum likelihood solution typically requires taking the [[derivative]]s of the [[likelihood function]] with respect to all the unknown values, the parameters and the latent variables, and simultaneously solving the resulting equations. In statistical models with latent variables, this is usually impossible. Instead, the result is typically a set of interlocking equations in which the solution to the parameters requires the values of the latent variables and vice versa, but substituting one set of equations into the other produces an unsolvable equation.

The EM algorithm proceeds from the observation that there is a way to solve these two sets of equations numerically. One can simply pick arbitrary values for one of the two sets of unknowns, use them to estimate the second set, then use these new values to find a better estimate of the first set, and then keep alternating between the two until the resulting values both converge to fixed points. It is not obvious that this will work, but it can be proven that in this context it does, and that the derivative of the likelihood is (arbitrarily close to) zero at that point, which in turn means that the point is either a maximum or a [[saddle point]].<ref name="Wu" /> In general, multiple maxima may occur, with no guarantee that the global maximum will be found. Some likelihoods also have [[Mathematical singularity|singularities]] in them, i.e., nonsensical maxima. For example, one of the ''solutions'' that may be found by EM in a mixture model involves setting one of the components to have zero variance and the mean parameter for the same component to be equal to one of the data points.
== Description ==

Given the [[statistical model]] which generates a set <math>\mathbf{X}</math> of observed data, a set of unobserved latent data or [[missing values]] <math>\mathbf{Z}</math>, and a vector of unknown parameters <math>\boldsymbol\theta</math>, along with a [[likelihood function]] <math>L(\boldsymbol\theta; \mathbf{X}, \mathbf{Z}) = p(\mathbf{X}, \mathbf{Z}\mid\boldsymbol\theta)</math>, the [[maximum likelihood estimate]] (MLE) of the unknown parameters is determined by maximizing the [[marginal likelihood]] of the observed data

:<math>L(\boldsymbol\theta; \mathbf{X}) = p(\mathbf{X}\mid\boldsymbol\theta) = \int  p(\mathbf{X},\mathbf{Z} \mid \boldsymbol\theta) \, d\mathbf{Z} </math>

However, this quantity is often intractable, since <math>\mathbf{Z}</math> is unobserved and its distribution is unknown before <math>\boldsymbol\theta</math> is obtained (e.g., if <math>\mathbf{Z}</math> is a sequence of events, the number of possible values grows exponentially with the sequence length, making exact calculation of the sum extremely difficult).

The EM algorithm seeks to find the MLE of the marginal likelihood by iteratively applying the following two steps (a worked example is sketched after the definitions):

:''Expectation step (E step)'': Define <math>Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)})</math> as the [[expected value]] of the log [[likelihood function]] of <math>\boldsymbol\theta</math>, with respect to the current [[conditional probability distribution|conditional distribution]] of <math>\mathbf{Z}</math> given <math>\mathbf{X}</math> and the current estimates of the parameters <math>\boldsymbol\theta^{(t)}</math>:

::<math>Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) = \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}}\left[ \log L (\boldsymbol\theta; \mathbf{X},\mathbf{Z})  \right] \,</math>

:''Maximization step (M step)'': Find the parameters that maximize this quantity:

::<math>\boldsymbol\theta^{(t+1)} = \underset{\boldsymbol\theta}{\operatorname{arg\,max}} \ Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) \, </math>
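To make the two steps concrete, the following minimal sketch (an illustration added here, not part of the original text) applies them to a model in which both steps have closed forms: exponentially distributed lifetimes with rate <math>\lambda</math>, some of which are right-censored. Under these assumptions the E step replaces each censored lifetime by its conditional expectation <math>c + 1/\lambda^{(t)}</math>, and the M step is the complete-data maximum likelihood update; the data values and variable names are made up for illustration.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative EM sketch: exponential lifetimes with rate lam,
# some observations right-censored at known thresholds.
observed = np.array([0.8, 1.3, 2.1, 0.5, 1.7])   # fully observed lifetimes
censor_at = np.array([2.0, 3.0, 1.5])            # only known to exceed these values
n = len(observed) + len(censor_at)

lam = 1.0                                        # initial guess for the rate
for _ in range(100):
    # E step: expected value of each censored lifetime under the current rate.
    # By memorylessness of the exponential, E[z | z > c] = c + 1/lam.
    expected_censored = censor_at + 1.0 / lam

    # M step: complete-data MLE of the rate, n / (sum of all lifetimes),
    # with censored lifetimes replaced by their expected values.
    new_lam = n / (observed.sum() + expected_censored.sum())

    if abs(new_lam - lam) < 1e-10:
        break
    lam = new_lam

print("estimated rate:", lam)
</syntaxhighlight>

Here <math>Q(\lambda\mid\lambda^{(t)}) = n\log\lambda - \lambda T^{(t)}</math>, where <math>T^{(t)}</math> is the expected total lifetime under <math>\lambda^{(t)}</math>, so maximizing it gives the one-line M-step update above.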
 
 
 
 
The typical models to which EM is applied use <math>\mathbf{Z}</math> as a latent variable indicating membership in one of a set of groups:

#The observed data points <math>\mathbf{X}</math> may be [[discrete random variable|discrete]] (taking values in a finite or countably infinite set) or [[continuous random variable|continuous]] (taking values in an uncountably infinite set). Associated with each data point may be a vector of observations.
#The [[missing values]] (aka [[latent variables]]) <math>\mathbf{Z}</math> are [[discrete random variable|discrete]], drawn from a fixed number of values, and with one latent variable per observed unit.
#The parameters are continuous, and are of two kinds: parameters that are associated with all data points, and those associated with a specific value of a latent variable (i.e., associated with all data points whose corresponding latent variable has that value).

However, it is possible to apply EM to other sorts of models.

The motive is as follows. If the value of the parameters <math>\boldsymbol\theta</math> is known, usually the value of the latent variables <math>\mathbf{Z}</math> can be found by maximizing the log-likelihood over all possible values of <math>\mathbf{Z}</math>, either simply by iterating over <math>\mathbf{Z}</math> or through an algorithm such as the [[Baum–Welch algorithm]] for [[hidden Markov model]]s. Conversely, if we know the value of the latent variables <math>\mathbf{Z}</math>, we can find an estimate of the parameters <math>\boldsymbol\theta</math> fairly easily, typically by simply grouping the observed data points according to the value of the associated latent variable and averaging the values, or some function of the values, of the points in each group. This suggests an iterative algorithm, in the case where both <math>\boldsymbol\theta</math> and <math>\mathbf{Z}</math> are unknown (see the sketch after this list):

#First, initialize the parameters <math>\boldsymbol\theta</math> to some random values.
#Compute the probability of each possible value of <math>\mathbf{Z}</math>, given <math>\boldsymbol\theta</math>.
#Then, use the just-computed values of <math>\mathbf{Z}</math> to compute a better estimate for the parameters <math>\boldsymbol\theta</math>.
#Iterate steps 2 and 3 until convergence.
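The grouping-and-averaging intuition described above can be sketched directly; the fragment below (illustrative only, with made-up data and names) uses hard assignments in step 2 for two one-dimensional groups whose only parameters are their means. EM proper replaces the hard assignment with the probabilities computed in step 2 of the list, as formalized by the E step above.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
# Toy 1-D data drawn from two groups (illustrative only).
x = np.concatenate([rng.normal(-2.0, 1.0, 50), rng.normal(3.0, 1.0, 50)])

# Step 1: initialize the group parameters (here, two means) randomly.
means = rng.choice(x, size=2, replace=False)

for _ in range(50):
    # Step 2 (hard-assignment intuition): assign each point to the group
    # whose mean makes it most likely, i.e. the closest mean here.
    z = np.argmin(np.abs(x[:, None] - means[None, :]), axis=1)

    # Step 3: re-estimate each group's parameter by averaging its points.
    new_means = np.array([x[z == k].mean() for k in (0, 1)])

    # Step 4: iterate until the estimates stop changing.
    if np.allclose(new_means, means):
        break
    means = new_means

print("estimated group means:", means)
</syntaxhighlight>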
 
 
  
 
== Properties ==

Speaking of an expectation (E) step is a bit of a [[misnomer]]. What are calculated in the first step are the fixed, data-dependent parameters of the function ''Q''. Once the parameters of ''Q'' are known, it is fully determined and is maximized in the second (M) step of an EM algorithm.

Although an EM iteration does increase the observed data (i.e., marginal) likelihood function, no guarantee exists that the sequence converges to a [[maximum likelihood estimator]]. For [[bimodal distribution|multimodal distributions]], this means that an EM algorithm may converge to a [[local maximum]] of the observed data likelihood function, depending on starting values. A variety of heuristic or [[metaheuristic]] approaches exist to escape a local maximum, such as random-restart [[hill climbing]] (starting with several different random initial estimates ''θ''<sup>(''t'')</sup>), or applying [[simulated annealing]] methods.

EM is especially useful when the likelihood is an [[exponential family]]: the E step becomes the sum of expectations of [[sufficient statistic]]s, and the M step involves maximizing a linear function. In such a case, it is usually possible to derive [[closed-form expression]] updates for each step, using the Sundberg formula (published by Rolf Sundberg using unpublished results of [[Per Martin-Löf]] and [[Anders Martin-Löf]]).

The EM method was modified to compute [[maximum a posteriori]] (MAP) estimates for [[Bayesian inference]] in the original paper by Dempster, Laird, and Rubin.

Other methods exist to find maximum likelihood estimates, such as [[gradient descent]], [[conjugate gradient]], or variants of the [[Gauss–Newton algorithm]]. Unlike EM, such methods typically require the evaluation of first and/or second derivatives of the likelihood function. Another class of alternatives is the so-called spectral techniques: moment-based approaches to learning the parameters of a probabilistic model have attracted increasing interest, since they enjoy guarantees such as global convergence under certain conditions, unlike EM, which is often plagued by the issue of getting stuck in local optima. Algorithms with learning guarantees can be derived for a number of important models such as mixture models and HMMs. For these spectral methods, no spurious local optima occur, and the true parameters can be consistently estimated under some regularity conditions.

A number of methods have been proposed to accelerate the sometimes slow convergence of the EM algorithm, such as those using conjugate gradient and modified Newton's methods (Newton–Raphson). Also, EM can be used with constrained estimation methods.

The parameter-expanded expectation maximization (PX-EM) algorithm often provides speed-up by "using a 'covariance adjustment' to correct the analysis of the M step, capitalising on extra information captured in the imputed complete data".

Expectation conditional maximization (ECM) replaces each M step with a sequence of conditional maximization (CM) steps in which each parameter θ<sub>i</sub> is maximized individually, conditionally on the other parameters remaining fixed. It can itself be extended into the expectation conditional maximization either (ECME) algorithm.

This idea is further extended in the generalized expectation maximization (GEM) algorithm, in which only an increase in the objective function ''F'' is sought for both the E step and the M step, as described in the maximization-maximization procedure section below.

It is also possible to consider the EM algorithm as a subclass of the MM (Majorize/Minimize or Minorize/Maximize, depending on context) algorithm, and therefore use any machinery developed in the more general case.

The Q-function used in the EM algorithm is based on the log likelihood. Therefore, it is regarded as the log-EM algorithm. The use of the log likelihood can be generalized to that of the α-log likelihood ratio. Then, the α-log likelihood ratio of the observed data can be exactly expressed as equality by using the Q-function of the α-log likelihood ratio and the α-divergence. Obtaining this Q-function is a generalized E step, and its maximization is a generalized M step. This pair is called the α-EM algorithm, which contains the log-EM algorithm as its subclass. Thus, the α-EM algorithm by Yasuo Matsuyama is an exact generalization of the log-EM algorithm. No computation of gradient or Hessian matrix is needed. The α-EM shows faster convergence than the log-EM algorithm by choosing an appropriate α, and it leads to a faster version of the hidden Markov model estimation algorithm, the α-HMM.

EM is a partially non-Bayesian, maximum likelihood method. Its final result gives a probability distribution over the latent variables (in the Bayesian style) together with a point estimate for ''θ'' (either a maximum likelihood estimate or a posterior mode). A fully Bayesian version of this may be wanted, giving a probability distribution over ''θ'' and the latent variables. The Bayesian approach to inference is simply to treat ''θ'' as another latent variable. In this paradigm, the distinction between the E and M steps disappears. If using the factorized Q approximation as described above (variational Bayes), solving can iterate over each latent variable (now including ''θ'') and optimize them one at a time. Now, ''k'' steps per iteration are needed, where ''k'' is the number of latent variables. For graphical models this is easy to do as each variable's new ''Q'' depends only on its Markov blanket, so local message passing can be used for efficient inference.
 
== Proof of correctness ==

Expectation-maximization works to improve <math>Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)})</math> rather than directly improving <math>\log p(\mathbf{X}\mid\boldsymbol\theta)</math>. Here is shown that improvements to the former imply improvements to the latter.<ref name="Little1987">{{cite book |last1=Little |first1=Roderick J.A. |last2=Rubin |first2=Donald B. |author2-link=Donald Rubin |title=Statistical Analysis with Missing Data |url=https://archive.org/details/statisticalanaly00litt |url-access=limited |series=Wiley Series in Probability and Mathematical Statistics |year=1987 |publisher=John Wiley & Sons |location=New York |isbn=978-0-471-80254-9 |pages=[https://archive.org/details/statisticalanaly00litt/page/n145 134]–136}}</ref>

For any <math>\mathbf{Z}</math> with non-zero probability <math>p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta)</math>, we can write

: <math>
\log p(\mathbf{X}\mid\boldsymbol\theta) = \log p(\mathbf{X},\mathbf{Z}\mid\boldsymbol\theta) - \log p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta).
</math>

We take the expectation over possible values of the unknown data <math>\mathbf{Z}</math> under the current parameter estimate <math>\boldsymbol\theta^{(t)}</math> by multiplying both sides by <math>p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)})</math> and summing (or integrating) over <math>\mathbf{Z}</math>. The left-hand side is the expectation of a constant, so we get:

: <math>
\begin{align}
\log p(\mathbf{X}\mid\boldsymbol\theta) &
= \sum_{\mathbf{Z}} p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}) \log p(\mathbf{X},\mathbf{Z}\mid\boldsymbol\theta)
- \sum_{\mathbf{Z}} p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}) \log p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta) \\
& = Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) + H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}),
\end{align}
</math>

where <math>H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)})</math> is defined by the negated sum it is replacing.

This last equation holds for every value of <math>\boldsymbol\theta</math> including <math>\boldsymbol\theta = \boldsymbol\theta^{(t)}</math>,

: <math>
\log p(\mathbf{X}\mid\boldsymbol\theta^{(t)})
= Q(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}) + H(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}),
</math>

and subtracting this last equation from the previous equation gives

: <math>
\log p(\mathbf{X}\mid\boldsymbol\theta) - \log p(\mathbf{X}\mid\boldsymbol\theta^{(t)})
= Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) - Q(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)})
+ H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) - H(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}).
</math>

However, [[Gibbs' inequality]] tells us that <math>H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) \ge H(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)})</math>, so we can conclude that

: <math>
\log p(\mathbf{X}\mid\boldsymbol\theta) - \log p(\mathbf{X}\mid\boldsymbol\theta^{(t)})
\ge Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) - Q(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}).
</math>

In words, choosing <math>\boldsymbol\theta</math> to improve <math>Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)})</math> causes <math>\log p(\mathbf{X}\mid\boldsymbol\theta)</math> to improve at least as much.
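The identity and the inequality above can be checked numerically on a model small enough to enumerate. The sketch below (an added illustration with made-up numbers) uses a two-component Bernoulli mixture, computes <math>Q</math>, <math>H</math> and the observed-data log-likelihood by brute force, verifies the decomposition <math>\log p(\mathbf{X}\mid\boldsymbol\theta) = Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) + H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)})</math>, and confirms that one M step improves the observed-data log-likelihood at least as much as it improves <math>Q</math>.

<syntaxhighlight lang="python">
import numpy as np

x = np.array([1, 1, 0, 1, 0, 0, 1, 1])            # observed binary data (made up)

def joint(x_i, z, theta):
    """p(x_i, z | theta) for a two-component Bernoulli mixture."""
    pi, p = theta                                  # pi = P(z=1), p[z] = P(x=1 | z)
    prior = pi if z == 1 else 1.0 - pi
    lik = p[z] if x_i == 1 else 1.0 - p[z]
    return prior * lik

def log_marginal(theta):
    return sum(np.log(joint(xi, 0, theta) + joint(xi, 1, theta)) for xi in x)

def posterior(theta):
    """p(z_i = 1 | x_i, theta) for every data point."""
    num = np.array([joint(xi, 1, theta) for xi in x])
    return num / (num + np.array([joint(xi, 0, theta) for xi in x]))

def Q_and_H(theta, theta_t):
    r = posterior(theta_t)                         # conditional distribution of Z under theta_t
    Q = H = 0.0
    for xi, ri in zip(x, r):
        marg = joint(xi, 0, theta) + joint(xi, 1, theta)
        for z, w in ((0, 1.0 - ri), (1, ri)):
            Q += w * np.log(joint(xi, z, theta))
            H -= w * np.log(joint(xi, z, theta) / marg)
    return Q, H

theta_t = (0.3, np.array([0.4, 0.6]))              # arbitrary current estimate

# The decomposition holds for any theta, not just theta_t.
for theta in (theta_t, (0.5, np.array([0.2, 0.9]))):
    Q, H = Q_and_H(theta, theta_t)
    assert np.isclose(log_marginal(theta), Q + H)

# One M step: maximize Q over theta (closed form for this mixture).
r = posterior(theta_t)
theta_new = (r.mean(), np.array([np.sum((1 - r) * x) / np.sum(1 - r),
                                 np.sum(r * x) / np.sum(r)]))

Q_new, _ = Q_and_H(theta_new, theta_t)
Q_old, _ = Q_and_H(theta_t, theta_t)
gain = log_marginal(theta_new) - log_marginal(theta_t)
assert Q_new >= Q_old - 1e-12                      # the M step does not decrease Q
assert gain >= Q_new - Q_old - 1e-12               # log-likelihood gains at least as much
print("log-likelihood gain after one EM step:", gain)
</syntaxhighlight>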
 
 
 
 
 
 
 
== As a maximization-maximization procedure ==

The EM algorithm can be viewed as two alternating maximization steps, that is, as an example of [[coordinate descent]].<ref name="neal1999">{{cite book|last1=Neal |first=Radford |last2=Hinton |first2=Geoffrey |author-link2=Geoffrey Hinton |title=A view of the EM algorithm that justifies incremental, sparse, and other variants |journal=Learning in Graphical Models |editor=[[Michael I. Jordan]] |pages= 355–368 |publisher= MIT Press |location=Cambridge, MA |year=1999 |isbn=978-0-262-60032-3 |url=ftp://ftp.cs.toronto.edu/pub/radford/emk.pdf |accessdate=2009-03-22}}</ref><ref name="hastie2001">{{cite book|last1=Hastie |first1= Trevor|author-link1=Trevor Hastie|last2=Tibshirani|first2=Robert|author-link2=Robert Tibshirani|last3=Friedman|first3=Jerome |year=2001 |title=The Elements of Statistical Learning |url=https://archive.org/details/elementsstatisti00thas_842 |url-access=limited |isbn=978-0-387-95284-0 |publisher=Springer |location=New York |chapter=8.5 The EM algorithm |pages=[https://archive.org/details/elementsstatisti00thas_842/page/n237 236]–243}}</ref> Consider the function:

:<math> F(q,\theta) := \operatorname{E}_q [ \log L (\theta ; x,Z) ] + H(q), </math>

where ''q'' is an arbitrary probability distribution over the unobserved data ''z'' and ''H(q)'' is the [[Entropy (information theory)|entropy]] of the distribution ''q''. This function can be written as

:<math> F(q,\theta) = -D_{\mathrm{KL}}\big(q \parallel p_{Z\mid X}(\cdot\mid x;\theta ) \big) + \log L(\theta;x), </math>

where <math>p_{Z\mid X}(\cdot\mid x;\theta )</math> is the conditional distribution of the unobserved data given the observed data <math>x</math> and <math>D_{KL}</math> is the [[Kullback–Leibler divergence]].

Then the steps in the EM algorithm may be viewed as (a numerical illustration follows below):

:''Expectation step'': Choose <math>q</math> to maximize <math>F</math>:
::<math> q^{(t)} = \operatorname{arg\,max}_q \ F(q,\theta^{(t)}) </math>

:''Maximization step'': Choose <math>\theta</math> to maximize <math>F</math>:
::<math> \theta^{(t+1)} = \operatorname{arg\,max}_\theta \ F(q^{(t)},\theta) </math>

In information geometry, the E step and the M step are interpreted as projections under dual affine connections, called the e-connection and the m-connection; the Kullback–Leibler divergence can also be understood in these terms.
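The second form of <math>F</math> shows that <math>F(q,\theta) \le \log L(\theta;x)</math>, with equality exactly when <math>q</math> is the conditional distribution <math>p_{Z\mid X}(\cdot\mid x;\theta)</math>, which is why the expectation step's maximization over <math>q</math> recovers the posterior. The fragment below (an added numerical illustration with made-up numbers, using the same kind of tiny Bernoulli mixture as in the proof section) verifies both facts.

<syntaxhighlight lang="python">
import numpy as np

# Tiny two-component Bernoulli mixture and a single observation (made up).
pi, p = 0.3, np.array([0.4, 0.6])                # theta: P(z=1) and P(x=1 | z)
x = 1

def joint(z):                                    # p(x, z | theta)
    prior = pi if z == 1 else 1.0 - pi
    return prior * (p[z] if x == 1 else 1.0 - p[z])

log_lik = np.log(joint(0) + joint(1))            # log L(theta; x) = log p(x | theta)

def F(q1):
    """F(q, theta) = E_q[log p(x, Z | theta)] + H(q), with q = (1 - q1, q1)."""
    q = np.array([1.0 - q1, q1])
    e_term = q[0] * np.log(joint(0)) + q[1] * np.log(joint(1))
    entropy = -np.sum(q * np.log(q))
    return e_term + entropy

posterior_q1 = joint(1) / (joint(0) + joint(1))  # p(z=1 | x, theta)

# At the posterior the KL term vanishes, so F equals the log-likelihood...
assert np.isclose(F(posterior_q1), log_lik)

# ...and any other q gives a smaller F, so arg max_q F(q, theta) is the posterior.
for q1 in (0.1, 0.5, 0.9):
    assert F(q1) <= log_lik + 1e-12

print("log L:", log_lik, " F at posterior:", F(posterior_q1))
</syntaxhighlight>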
=== Gaussian mixture ===

[Figure: ''k''-means and EM on artificial data visualized with ELKI. Using the variances, the EM algorithm can describe the normal distributions exactly, while ''k''-means splits the data in Voronoi cells. The cluster center is indicated by the lighter, bigger symbol.]

[Figure: An animation demonstrating the EM algorithm fitting a two-component Gaussian mixture model to the Old Faithful dataset, stepping from a random initialization to convergence.]

Let <math>\mathbf{x} = (\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_n)</math> be a sample of <math>n</math> independent observations from a mixture of two multivariate normal distributions of dimension <math>d</math>, and let <math>\mathbf{z} = (z_1,z_2,\ldots,z_n)</math> be the latent variables that determine the component from which the observation originates. Special cases of this model include censored or truncated observations from one normal distribution.
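A minimal EM implementation for this two-component Gaussian mixture is sketched below, written for one-dimensional data to keep it short; the multivariate case replaces the scalar means and variances by mean vectors and covariance matrices. The data, starting values and variable names are illustrative assumptions, and the per-iteration log-likelihood is printed to show that it never decreases.

<syntaxhighlight lang="python">
import numpy as np

def normal_pdf(x, mean, var):
    return np.exp(-(x - mean) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

rng = np.random.default_rng(1)
# Synthetic 1-D data from two Gaussian components (illustrative only).
x = np.concatenate([rng.normal(0.0, 1.0, 150), rng.normal(5.0, 1.5, 100)])

# Initial guesses: mixing weight of component 2, means, variances.
w, mu, var = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])

for it in range(200):
    # E step: responsibilities r[i] = P(z_i = component 2 | x_i, current parameters).
    p1 = (1.0 - w) * normal_pdf(x, mu[0], var[0])
    p2 = w * normal_pdf(x, mu[1], var[1])
    r = p2 / (p1 + p2)

    # Observed-data log-likelihood; EM guarantees this is non-decreasing.
    loglik = np.sum(np.log(p1 + p2))
    if it % 20 == 0:
        print(f"iteration {it:3d}   log-likelihood {loglik:.3f}")

    # M step: weighted maximum likelihood updates.
    w = r.mean()
    mu = np.array([np.sum((1 - r) * x) / np.sum(1 - r),
                   np.sum(r * x) / np.sum(r)])
    var = np.array([np.sum((1 - r) * (x - mu[0]) ** 2) / np.sum(1 - r),
                    np.sum(r * (x - mu[1]) ** 2) / np.sum(r)])

print("weights:", 1 - w, w, " means:", mu, " variances:", var)
</syntaxhighlight>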
  
== Applications ==

EM is frequently used for [[data clustering]] in [[machine learning]] and [[computer vision]]. In [[natural language processing]], two prominent instances of the algorithm are the [[Baum–Welch algorithm]] for [[hidden Markov models]], and the [[inside-outside algorithm]] for unsupervised induction of [[probabilistic context-free grammar]]s.

EM is frequently used for parameter estimation of [[mixed model]]s,<ref>{{cite journal |doi=10.1080/01621459.1988.10478693 |title=Newton—Raphson and EM Algorithms for Linear Mixed-Effects Models for Repeated-Measures Data |journal=Journal of the American Statistical Association |volume=83 |issue=404 |pages=1014 |year=1988 |last1=Lindstrom |first1=Mary J |last2=Bates |first2=Douglas M }}</ref><ref>{{cite journal |doi=10.2307/1390614 |jstor=1390614 |title=Fitting Mixed-Effects Models Using Efficient EM-Type Algorithms |journal=Journal of Computational and Graphical Statistics |volume=9 |issue=1 |pages=78–98 |year=2000 |last1=Van Dyk |first1=David A }}</ref> notably in [[quantitative genetics]].<ref>{{cite journal |doi=10.1111/anzs.12208 |title=A new REML (parameter expanded) EM algorithm for linear mixed models |journal=Australian & New Zealand Journal of Statistics |volume=59 |issue=4 |pages=433 |year=2017 |last1=Diffey |first1=S. M |last2=Smith |first2=A. B |last3=Welsh |first3=A. H |last4=Cullis |first4=B. R |doi-access=free }}</ref>

In [[psychometrics]], EM is almost indispensable for estimating item parameters and latent abilities of [[item response theory]] models.

With the ability to deal with missing data and observe unidentified variables, EM is becoming a useful tool to price and manage risk of a portfolio.{{Citation needed|date=November 2017}}

The EM algorithm (and its faster variant [[ordered subset expectation maximization]]) is also widely used in [[medical imaging|medical image]] reconstruction, especially in [[positron emission tomography]], [[single photon emission computed tomography]], and x-ray [[computed tomography]]. See above for other faster variants of EM.

In [[structural engineering]], the Structural Identification using Expectation Maximization (STRIDE)<ref>Matarazzo, T. J., and Pakzad, S. N. (2016). "STRIDE for Structural Identification using Expectation Maximization: Iterative Output-Only Method for Modal Identification." Journal of Engineering Mechanics. http://ascelibrary.org/doi/abs/10.1061/(ASCE)EM.1943-7889.0000951</ref> algorithm is an output-only method for identifying natural vibration properties of a structural system using sensor data (see [[Operational Modal Analysis]]).
  
== Filtering and smoothing EM algorithms ==
+
在结构工程中,利用期望最大化(STRIDE)算法进行结构识别是一种仅输出的方法,用于利用传感器数据识别结构系统的自然振动特性(见运行模态分析)。
  
A [[Kalman filter]] is typically used for on-line state estimation and a minimum-variance smoother may be employed for off-line or batch state estimation. However, these minimum-variance solutions require estimates of the state-space model parameters. EM algorithms can be used for solving joint state and parameter estimation problems.
+
== 滤波和平滑EM算法 ==
  
 +
A Kalman filter is typically used for on-line state estimation and a minimum-variance smoother may be employed for off-line or batch state estimation. However, these minimum-variance solutions require estimates of the state-space model parameters. EM algorithms can be used for solving joint state and parameter estimation problems.
  
 +
卡尔曼滤波器通常用于在线状态估计,最小方差平滑器可用于离线或批状态估计。然而,这些最小方差解需要状态空间模型参数的估计。EM 算法可用于求解联合状态和参数估计问题。
  
 
Filtering and smoothing EM algorithms arise by repeating this two-step procedure:
 
Filtering and smoothing EM algorithms arise by repeating this two-step procedure:
  
 +
滤波和平滑 EM 算法是通过重复这两个步骤产生的:
  
 +
E-step
  
;E-step
+
Operate a Kalman filter or a minimum-variance smoother designed with current parameter estimates to obtain updated state estimates.
  
: Operate a Kalman filter or a minimum-variance smoother designed with current parameter estimates to obtain updated state estimates.
+
操作一个 Kalman 滤波器或一个最小方差平滑设计与当前的参数估计,以获得更新的状态估计。
  
 +
M-step
  
 +
Use the filtered or smoothed state estimates within maximum-likelihood calculations to obtain updated parameter estimates.
  
Category:Estimation methods
+
使用最大似然计算中的滤波或平滑状态估计来获得更新的参数估计。
  
类别: 估算方法
+
Suppose that a Kalman filter or minimum-variance smoother operates on measurements of a single-input-single-output system that possess additive white noise. An updated measurement noise variance estimate can be obtained from the maximum likelihood calculation
  
;M-step
+
[math]\displaystyle{ \widehat{\sigma}^2_v = \frac{1}{N} \sum_{k=1}^N {(z_k-\widehat{x}_k)}^2, }[/math]
  
Category:Machine learning algorithms
+
假设卡尔曼滤波器或最小方差平滑器对具有附加白噪声的单输入单输出系统进行测量。通过极大似然估计,可以得到更新的测量噪声方差估计
  
类别: 机器学习算法
+
where [math]\displaystyle{ \widehat{x}_k }[/math] are scalar output estimates calculated by a filter or a smoother from N scalar measurements [math]\displaystyle{ z_k }[/math]. The above update can also be applied to updating a Poisson measurement noise intensity. Similarly, for a first-order auto-regressive process, an updated process noise variance estimate can be calculated by
  
: Use the filtered or smoothed state estimates within maximum-likelihood calculations to obtain updated parameter estimates.
+
其中 < math > widehat { x } _ k <nowiki></math ></nowiki> 是由过滤器计算的标量输出估计值,或者由 n 个标量测量值计算得到的平滑器。上述更新也可以应用于泊松测量噪声强度的更新。同样,对于一阶自回归过程,更新后的过程噪声方差估计可以通过
  
Category:Missing data
+
where [math]\displaystyle{ \widehat{x}_k }[/math] and [math]\displaystyle{ \widehat{x}_{k+1} }[/math] are scalar state estimates calculated by a filter or a smoother. The updated model coefficient estimate is obtained via
  
类别: 缺失数据
+
其中 < math > 和 < math > <nowiki></math ></nowiki> > > > > > > > x }{ k + 1} <nowiki></math ></nowiki> 是由过滤器或平滑器计算的标量状态估计。通过对模型参数的估计,得到了修正后的模型参数估计
  
 +
The convergence of parameter estimates such as those above are well studied.
  
 +
研究了上述参数估计的收敛性。
  
Category:Statistical algorithms
+
== 变体 ==
 +
A number of methods have been proposed to accelerate the sometimes slow convergence of the EM algorithm, such as those using conjugate gradient and modified Newton's methods (Newton–Raphson). Also, EM can be used with constrained estimation methods.
  
类别: 统计算法
+
针对有时EM 算法收敛速度慢的问题,一些改进方法被提出,如共轭梯度法和修正牛顿法(Newton-Raphson)。此外,EM 还可以与约束估计方法一起使用。
  
Suppose that a [[Kalman filter]] or minimum-variance smoother operates on measurements of a single-input-single-output system that possess additive white noise. An updated measurement noise variance estimate can be obtained from the [[maximum likelihood]] calculation
+
Parameter-expanded expectation maximization (PX-EM) algorithm often provides speed up by "using a 'covariance adjustment' to correct the analysis of the M step, capitalising on extra information captured in the imputed complete data".
  
Category:Optimization algorithms and methods
+
参数扩展期望最大化(PX-EM)算法通过“利用输入完整数据中获得的额外信息,通过‘协方差调整’来校正 m 步的分析” ,从而提高了计算速度。
  
类别: 优化算法和方法
+
Expectation conditional maximization (ECM) replaces each M step with a sequence of conditional maximization (CM) steps in which each parameter θ<sub>i</sub> is maximized individually, conditionally on the other parameters remaining fixed. Itself can be extended into the Expectation conditional maximization either (ECME) algorithm.
  
: <math>\widehat{\sigma}^2_v = \frac{1}{N} \sum_{k=1}^N {(z_k-\widehat{x}_k)}^2,</math>
+
期望条件最大化(ECM)用一系列条件最大化(CM)步骤代替每个 m 步骤,其中每个参数 θ < sub > i 单独最大化,条件是其他参数保持不变。本身也可以扩展为期望条件最大化(ECME)算法。
  
Category:Cluster analysis algorithms
+
It is also possible to consider the EM algorithm as a subclass of the MM (Majorize/Minimize or Minorize/Maximize, depending on context) algorithm, and therefore use any machinery developed in the more general case.
  
类别: 数据聚类算法
+
也可以考虑将 EM 算法作为 MM (majorize/minize 或 Minorize/Maximize,取决于上下文)算法的子类,因此可以使用在更一般情况下开发的任何机制。
  
<noinclude>
+
'''α-EM算法'''
  
<small>This page was moved from [[wikipedia:en:Expectation–maximization algorithm]]. Its edit history can be viewed at [[EM算法/edithistory]]</small></noinclude>
+
The Q-function used in the EM algorithm is based on the log likelihood. Therefore, it is regarded as the log-EM algorithm. The use of the log likelihood can be generalized to that of the α-log likelihood ratio. Then, the α-log likelihood ratio of the observed data can be exactly expressed as equality by using the Q-function of the α-log likelihood ratio and the α-divergence. Obtaining this Q-function is a generalized E step. Its maximization is a generalized M step. This pair is called the α-EM algorithm which contains the log-EM algorithm as its subclass. Thus, the α-EM algorithm by Yasuo Matsuyama is an exact generalization of the log-EM algorithm. No computation of gradient or Hessian matrix is needed. The α-EM shows faster convergence than the log-EM algorithm by choosing an appropriate α. The α-EM algorithm leads to a faster version of the Hidden Markov model estimation algorithm α-HMM.
  
 +
<small>This page was moved from [[wikipedia:en:Expectation–maximization algorithm]]. Its edit history can be viewed at [[EM算法/edithistory]]</small>
 
[[Category:待整理页面]]
 
[[Category:待整理页面]]

2021年12月22日 (三) 19:47的版本

本词条由Xugami初步翻译

此词条暂由彩云小译翻译,翻译字数共1877,未经人工整理和审校,带来阅读不便,请见谅。


In statistics, an expectation–maximization (EM) algorithm is an iterative method to find (local) maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models, where the model depends on unobserved latent variables. The EM iteration alternates between performing an expectation (E) step, which creates a function for the expectation of the log-likelihood evaluated using the current estimate for the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found on the E step. These parameter-estimates are then used to determine the distribution of the latent variables in the next E step.

在统计学中,期望最大化(EM)算法是一种寻找统计模型中(局部)极大似然或者最大后验(MAP)参数估计的迭代方法,其中的统计模型依赖于未观测到的潜在变量。EM迭代过程中交替执行期望(E)步和最大化(M)步;前者使用当前参数估计值建立对数似然函数的期望函数,后者计算能够最大化E步中获得的期望对数似然函数的参数。这些参数估计值将用于确定下一个E步中潜在变量的分布。



图: Old Faithful 间歇泉喷发数据的 EM 聚类。随机初始模型(由于坐标轴尺度不同,看起来是两个非常扁平而宽的椭圆)被拟合到观测数据上。在最初几次迭代中,模型发生了显著变化,随后收敛到间歇泉的两个模态。使用 ELKI 可视化。



历史

The EM algorithm was explained and given its name in a classic 1977 paper by Arthur Dempster, Nan Laird, and Donald Rubin. They pointed out that the method had been "proposed many times in special circumstances" by earlier authors. One of the earliest is the gene-counting method for estimating allele frequencies by Cedric Smith. A very detailed treatment of the EM method for exponential families was published by Rolf Sundberg in his thesis and several papers following his collaboration with Per Martin-Löf and Anders Martin-Löf.The Dempster–Laird–Rubin paper in 1977 generalized the method and sketched a convergence analysis for a wider class of problems. The Dempster–Laird–Rubin paper established the EM method as an important tool of statistical analysis.

The convergence analysis of the Dempster–Laird–Rubin algorithm was flawed and a correct convergence analysis was published by C. F. Jeff Wu in 1983. Wu's proof established the EM method's convergence outside of the exponential family, as claimed by Dempster–Laird–Rubin.



Arthur Dempster, Nan Laird, and Donald Rubin于1977年发表的一篇经典的论文中解释和命名了EM算法。他们指出该方法被早期作者“多次在特殊条件下提出”。Cedric Smith提出的估计等位基因频率的基因计数法便是其中之一。基于与Per Martin-Löf和Anders Martin-Löf的合作,Rolf Sundberg在他的学位论文和若干论文中详述了针对指数族的EM方法。1977年Dempster–Laird–Rubin的论文推广了该方法并针对更为广泛的问题进行了收敛性分析,该论文确立EM方法作为统计分析的重要工具。 Dempster–Laird–Rubin算法的收敛分析存在缺陷,C. F. Jeff Wu在1983年发表了一项修正的收敛性分析。Dempster–Laird–Rubin称,Wu的工作建立了EM方法在指数族之外的收敛。


介绍

The EM algorithm is used to find (local) maximum likelihood parameters of a statistical model in cases where the equations cannot be solved directly. Typically these models involve latent variables in addition to unknown parameters and known data observations. That is, either missing values exist among the data, or the model can be formulated more simply by assuming the existence of further unobserved data points. For example, a mixture model can be described more simply by assuming that each observed data point has a corresponding unobserved data point, or latent variable, specifying the mixture component to which each data point belongs.

Finding a maximum likelihood solution typically requires taking the derivatives of the likelihood function with respect to all the unknown values, the parameters and the latent variables, and simultaneously solving the resulting equations. In statistical models with latent variables, this is usually impossible. Instead, the result is typically a set of interlocking equations in which the solution to the parameters requires the values of the latent variables and vice versa, but substituting one set of equations into the other produces an unsolvable equation.

The EM algorithm proceeds from the observation that there is a way to solve these two sets of equations numerically. One can simply pick arbitrary values for one of the two sets of unknowns, use them to estimate the second set, then use these new values to find a better estimate of the first set, and then keep alternating between the two until the resulting values both converge to fixed points. It's not obvious that this will work, but it can be proven that in this context it does, and that the derivative of the likelihood is (arbitrarily close to) zero at that point, which in turn means that the point is either a maximum or a saddle point. In general, multiple maxima may occur, with no guarantee that the global maximum will be found. Some likelihoods also have singularities in them, i.e., nonsensical maxima. For example, one of the solutions that may be found by EM in a mixture model involves setting one of the components to have zero variance and the mean parameter for the same component to be equal to one of the data points.

EM 算法用于在无法直接求解方程的情况下查找统计模型的(局部)最大似然参数。通常,除了未知参数和已知数据观察之外,这些模型还涉及潜在变量。也就是说,要么数据中存在缺失值,要么可以通过假设存在更多未观察到的数据点来更简单地制定模型。例如,通过假设每个观察到的数据点都有一个对应的未观察到的数据点或潜在变量,指定每个数据点所属的混合成分,可以更简单地描述混合模型。

寻找最大似然解通常需要对所有未知值、参数和潜在变量求似然函数的导数,并同时求解所得方程。在具有潜在变量的统计模型中,这通常是不可能的。相反,结果通常是一组互锁方程,其中参数的解需要潜在变量的值,反之亦然,但将一组方程代入另一组会产生无法求解的方程。

EM 算法的出发点是:这两组方程可以用数值方法交替求解。可以先为两组未知数中的一组选取任意值,用它们估计第二组,再用这些新值得到第一组的更好估计,如此在两者之间不断交替,直到两组结果都收敛到不动点。这样做是否有效并不显然,但可以证明在这种情形下它确实有效,并且似然函数的导数在该点(任意接近)为零,这意味着该点要么是极大值点,要么是鞍点。一般而言,可能存在多个极大值,且不能保证找到全局极大值。有些似然函数还存在奇点,即无意义的极大值。例如,EM 在混合模型中可能找到的解之一,是把其中一个成分的方差设为零,并把该成分的均值参数设为等于某个数据点。
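To make the alternating scheme just described concrete, here is a minimal sketch (a toy concave objective with two unknowns; the function and numbers are illustrative assumptions, not taken from the article) in which each unknown is solved in closed form while the other is held fixed, and the pair converges to a fixed point:

<syntaxhighlight lang="python">
# Toy illustration of the alternating scheme described above (not EM itself):
# fix one set of unknowns, solve for the other in closed form, and alternate
# until both converge to a fixed point.
def alternate_until_convergence(a=0.0, b=0.0, tol=1e-10, max_iter=1000):
    # Objective: f(a, b) = -(a - 3)**2 - (b + 1)**2 - 0.5 * (a - 3) * (b + 1)
    # Conditional maximizers are obtained by setting each partial derivative to zero.
    for _ in range(max_iter):
        a_new = 3.0 - 0.25 * (b + 1.0)        # argmax over a with b held fixed
        b_new = -1.0 - 0.25 * (a_new - 3.0)   # argmax over b with a held fixed
        if abs(a_new - a) < tol and abs(b_new - b) < tol:
            return a_new, b_new
        a, b = a_new, b_new
    return a, b

print(alternate_until_convergence())  # converges to (3.0, -1.0)
</syntaxhighlight>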

描述

Given the statistical model which generates a set [math]\displaystyle{ \mathbf{X} }[/math] of observed data, a set of unobserved latent data or missing values [math]\displaystyle{ \mathbf{Z} }[/math], and a vector of unknown parameters [math]\displaystyle{ \boldsymbol\theta }[/math], along with a likelihood function [math]\displaystyle{ L(\boldsymbol\theta; \mathbf{X}, \mathbf{Z}) = p(\mathbf{X}, \mathbf{Z}\mid\boldsymbol\theta) }[/math], the maximum likelihood estimate (MLE) of the unknown parameters is determined by maximizing the marginal likelihood of the observed data.

[math]\displaystyle{ L(\boldsymbol\theta; \mathbf{X}) = p(\mathbf{X}\mid\boldsymbol\theta) = \int p(\mathbf{X},\mathbf{Z} \mid \boldsymbol\theta) \, d\mathbf{Z} }[/math]

However, this quantity is often intractable since [math]\displaystyle{ \mathbf{Z} }[/math] is unobserved and the distribution of [math]\displaystyle{ \mathbf{Z} }[/math] is unknown before attaining [math]\displaystyle{ \boldsymbol\theta }[/math] .
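As a rough illustration of the marginal likelihood above, the following sketch (a two-component univariate Gaussian mixture with made-up parameter values) evaluates [math]\displaystyle{ \log p(\mathbf{X}\mid\boldsymbol\theta) }[/math] by summing the joint over the two possible values of each latent [math]\displaystyle{ z_i }[/math]:

<syntaxhighlight lang="python">
# Sketch: marginal log-likelihood of a two-component univariate Gaussian mixture,
# obtained by summing p(x_i, z_i | theta) over the two possible values of z_i.
# The data and parameter values below are illustrative assumptions, not from the article.
import numpy as np
from scipy.stats import norm
from scipy.special import logsumexp

def marginal_log_likelihood(x, weights, means, stds):
    # log p(x_i | theta) = logsumexp_j [ log w_j + log N(x_i; mu_j, sigma_j) ]
    log_joint = np.log(weights)[None, :] + norm.logpdf(x[:, None], means[None, :], stds[None, :])
    return logsumexp(log_joint, axis=1).sum()

x = np.array([0.2, -0.5, 4.1, 3.8, 0.0])
print(marginal_log_likelihood(x, weights=np.array([0.5, 0.5]),
                              means=np.array([0.0, 4.0]),
                              stds=np.array([1.0, 1.0])))
</syntaxhighlight>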


The EM algorithm seeks to find the MLE of the marginal likelihood by iteratively applying these two steps:

EM 算法试图通过迭代地应用以下两个步骤来找到边际似然的最大似然估计:

Expectation step (E step): Define [math]\displaystyle{ Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) }[/math] as the expected value of the log likelihood function of [math]\displaystyle{ \boldsymbol\theta }[/math], with respect to the current conditional distribution of [math]\displaystyle{ \mathbf{Z} }[/math] given [math]\displaystyle{ \mathbf{X} }[/math] and the current estimates of the parameters [math]\displaystyle{ \boldsymbol\theta^{(t)} }[/math]:
[math]\displaystyle{ Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) = \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}}\left[ \log L (\boldsymbol\theta; \mathbf{X},\mathbf{Z}) \right] \, }[/math]
Maximization step (M step): Find the parameters that maximize this quantity:

M步:找到使数量最大化的参数:

[math]\displaystyle{ \boldsymbol\theta^{(t+1)} = \underset{\boldsymbol\theta}{\operatorname{arg\,max}} \ Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) \, }[/math]
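For many simple models both steps have closed forms. The sketch below (a two-component univariate Gaussian mixture; data and starting values are illustrative assumptions) performs one E step, computing the conditional distribution of [math]\displaystyle{ \mathbf{Z} }[/math] as per-point responsibilities, followed by an M step that maximizes [math]\displaystyle{ Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) }[/math] in closed form:

<syntaxhighlight lang="python">
# Sketch of a single EM iteration for a two-component univariate Gaussian mixture.
# E step: responsibilities r_ij = p(z_i = j | x_i, theta^(t)).
# M step: closed-form maximization of Q(theta | theta^(t)).
# Data and starting values are illustrative assumptions.
import numpy as np
from scipy.stats import norm

def em_step(x, weights, means, stds):
    # E step: posterior probability of each component for each data point
    dens = weights[None, :] * norm.pdf(x[:, None], means[None, :], stds[None, :])
    resp = dens / dens.sum(axis=1, keepdims=True)

    # M step: responsibility-weighted updates of weights, means and variances
    n_j = resp.sum(axis=0)
    new_weights = n_j / x.size
    new_means = (resp * x[:, None]).sum(axis=0) / n_j
    new_vars = (resp * (x[:, None] - new_means[None, :]) ** 2).sum(axis=0) / n_j
    return new_weights, new_means, np.sqrt(new_vars)

x = np.array([0.1, -0.3, 0.4, 3.9, 4.2, 3.7])
weights, means, stds = em_step(x, np.array([0.5, 0.5]), np.array([0.0, 1.0]), np.array([1.0, 1.0]))
print(weights, means, stds)
</syntaxhighlight>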

The typical models to which EM is applied use [math]\displaystyle{ \mathbf{Z} }[/math] as a latent variable indicating membership in one of a set of groups:

  1. The observed data points [math]\displaystyle{ \mathbf{X} }[/math] may be discrete (taking values in a finite or countably infinite set) or continuous (taking values in an uncountably infinite set). Associated with each data point may be a vector of observations.
  2. The missing values (aka latent variables) [math]\displaystyle{ \mathbf{Z} }[/math] are discrete, drawn from a fixed number of values, and with one latent variable per observed unit.
  3. The parameters are continuous, and are of two kinds: Parameters that are associated with all data points, and those associated with a specific value of a latent variable (i.e., associated with all data points which corresponding latent variable has that value).

However, it is possible to apply EM to other sorts of models.


3. 参数是连续的,有两种情况:和所有数据点相关联的参数,或和一个潜在变量的一个具体值相关联的参数(也就是与所有对应有该值的潜在变量的数据点相关联)。

然而,EM也可以应用到其他模型中。

The motive is as follows. If the value of the parameters [math]\displaystyle{ \boldsymbol\theta }[/math] is known, usually the value of the latent variables [math]\displaystyle{ \mathbf{Z} }[/math] can be found by maximizing the log-likelihood over all possible values of [math]\displaystyle{ \mathbf{Z} }[/math], either simply by iterating over [math]\displaystyle{ \mathbf{Z} }[/math] or through an algorithm such as the Baum–Welch algorithm for hidden Markov models. Conversely, if we know the value of the latent variables [math]\displaystyle{ \mathbf{Z} }[/math], we can find an estimate of the parameters [math]\displaystyle{ \boldsymbol\theta }[/math] fairly easily, typically by simply grouping the observed data points according to the value of the associated latent variable and averaging the values, or some function of the values, of the points in each group. This suggests an iterative algorithm, in the case where both [math]\displaystyle{ \boldsymbol\theta }[/math] and [math]\displaystyle{ \mathbf{Z} }[/math] are unknown:

  1. First, initialize the parameters [math]\displaystyle{ \boldsymbol\theta }[/math] to some random values.
  2. Compute the probability of each possible value of [math]\displaystyle{ \mathbf{Z} }[/math] , given [math]\displaystyle{ \boldsymbol\theta }[/math].
  3. Then, use the just-computed values of [math]\displaystyle{ \mathbf{Z} }[/math] to compute a better estimate for the parameters [math]\displaystyle{ \boldsymbol\theta }[/math].
  4. Iterate steps 2 and 3 until convergence.

The algorithm as just described monotonically approaches a local minimum of the cost function.

刚刚描述的算法单调地接近成本函数的局部最小值。
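The four steps above can be wired into a loop that stops once the observed-data log-likelihood no longer improves; as stated, each iteration cannot decrease that quantity. A self-contained sketch (synthetic data and illustrative starting values) that monitors this is:

<syntaxhighlight lang="python">
# Sketch of the full iterative recipe above for a two-component Gaussian mixture,
# monitoring the observed-data log-likelihood, which is non-decreasing across iterations.
# Data and initial values are illustrative assumptions.
import numpy as np
from scipy.stats import norm
from scipy.special import logsumexp

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(4.0, 0.7, 300)])

w, mu, sd = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])  # step 1: initialize
prev_ll = -np.inf
for it in range(200):
    log_joint = np.log(w)[None, :] + norm.logpdf(x[:, None], mu[None, :], sd[None, :])
    ll = logsumexp(log_joint, axis=1).sum()        # observed-data log-likelihood
    assert ll >= prev_ll - 1e-9                    # monotone improvement
    if ll - prev_ll < 1e-8:                        # step 4: stop at convergence
        break
    prev_ll = ll
    resp = np.exp(log_joint - logsumexp(log_joint, axis=1, keepdims=True))  # step 2: E step
    n_j = resp.sum(axis=0)                                                  # step 3: M step
    w = n_j / x.size
    mu = (resp * x[:, None]).sum(axis=0) / n_j
    sd = np.sqrt((resp * (x[:, None] - mu[None, :]) ** 2).sum(axis=0) / n_j)

print(it, ll, w, mu, sd)
</syntaxhighlight>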

特性

Speaking of an expectation (E) step is a bit of a misnomer. What are calculated in the first step are the fixed, data-dependent parameters of the function Q. Once the parameters of Q are known, it is fully determined and is maximized in the second (M) step of an EM algorithm.

把第一步称为“期望(E)步”其实有点名不副实。这一步中计算的是函数 Q 的那些固定的、依赖于数据的参数。一旦 Q 的参数已知,Q 就完全确定了,并在 EM 算法的第二步(M 步)中被最大化。

Although an EM iteration does increase the observed data (i.e., marginal) likelihood function, no guarantee exists that the sequence converges to a maximum likelihood estimator. For multimodal distributions, this means that an EM algorithm may converge to a local maximum of the observed data likelihood function, depending on starting values. A variety of heuristic or metaheuristic approaches exist to escape a local maximum, such as random-restart hill climbing (starting with several different random initial estimates θ(t)), or applying simulated annealing methods.

尽管 EM 迭代确实增加了观察到的数据(即边际)似然函数,但不能保证序列收敛到最大似然估计量。 对于多峰分布,这意味着 EM 算法可能会收敛到观察到的数据似然函数的局部最大值,这取决于起始值。 存在各种启发式或元启发式方法来逃避局部最大值,例如随机重启爬山(从几个不同的随机初始估计值 θ(t) 开始),或应用模拟退火方法。
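A minimal sketch of the random-restart idea mentioned above (synthetic data; the number of restarts and the initialization scheme are illustrative assumptions): run EM from several random initial estimates and keep the run with the highest final observed-data log-likelihood:

<syntaxhighlight lang="python">
# Sketch of random-restart for mixture EM: run the algorithm from several random
# initial mean estimates and keep the solution with the highest final log-likelihood.
# Data, number of restarts and initialization scheme are illustrative assumptions.
import numpy as np
from scipy.stats import norm
from scipy.special import logsumexp

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(4.0, 0.7, 300)])

def run_em(x, mu, n_iter=100):
    w, sd = np.array([0.5, 0.5]), np.array([1.0, 1.0])
    for _ in range(n_iter):
        log_joint = np.log(w)[None, :] + norm.logpdf(x[:, None], mu[None, :], sd[None, :])
        resp = np.exp(log_joint - logsumexp(log_joint, axis=1, keepdims=True))
        n_j = resp.sum(axis=0)
        w = n_j / x.size
        mu = (resp * x[:, None]).sum(axis=0) / n_j
        sd = np.sqrt((resp * (x[:, None] - mu[None, :]) ** 2).sum(axis=0) / n_j)
    ll = logsumexp(np.log(w)[None, :] + norm.logpdf(x[:, None], mu[None, :], sd[None, :]), axis=1).sum()
    return ll, w, mu, sd

results = [run_em(x, mu=rng.normal(x.mean(), x.std(), size=2)) for _ in range(5)]
best = max(results, key=lambda r: r[0])   # keep the restart with the best log-likelihood
print(best)
</syntaxhighlight>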

EM is especially useful when the likelihood is an exponential family: the E step becomes the sum of expectations of sufficient statistics, and the M step involves maximizing a linear function. In such a case, it is usually possible to derive closed-form expression updates for each step, using the Sundberg formula (published by Rolf Sundberg using unpublished results of Per Martin-Löf and Anders Martin-Löf).

当似然是指数族时,EM 尤其有用:E 步成为充分统计量的期望总和,而 M 步涉及最大化线性函数。 在这种情况下,通常可以使用 Sundberg 公式(由 Rolf Sundberg 发布,使用 Per Martin-Löf 和 Anders Martin-Löf 的未发表结果)为每个步骤导出封闭形式的表达式更新。

The EM method was modified to compute maximum a posteriori (MAP) estimates for Bayesian inference in the original paper by Dempster, Laird, and Rubin.

在 Dempster、Laird 和 Rubin 的原始论文中,EM 方法被修改为计算贝叶斯推理的最大后验 (MAP) 估计。
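One common way to obtain MAP rather than maximum-likelihood estimates is to add the log-prior to the quantity maximized in the M step. The sketch below (a Dirichlet prior on mixture weights; the prior and the numbers are illustrative assumptions, not taken from the article) shows how the weight update then simply gains pseudo-counts:

<syntaxhighlight lang="python">
# Sketch: turning an ML M-step into a MAP M-step by adding a log-prior term.
# Here a Dirichlet(alpha) prior on mixture weights just adds pseudo-counts;
# the prior and the responsibility sums below are illustrative assumptions.
import numpy as np

resp_sums = np.array([2.3, 7.7])      # n_j: summed responsibilities from an E step
alpha = np.array([2.0, 2.0])          # Dirichlet prior hyperparameters (assumed)

ml_weights = resp_sums / resp_sums.sum()
map_weights = (resp_sums + alpha - 1.0) / (resp_sums.sum() + alpha.sum() - alpha.size)
print(ml_weights, map_weights)
</syntaxhighlight>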

Other methods exist to find maximum likelihood estimates, such as gradient descent, conjugate gradient, or variants of the Gauss–Newton algorithm. Unlike EM, such methods typically require the evaluation of first and/or second derivatives of the likelihood function.

存在其他方法来寻找最大似然估计,例如梯度下降、共轭梯度或高斯-牛顿算法的变体。 与 EM 不同,此类方法通常需要评估似然函数的一阶和/或二阶导数。

Other alternatives include moment-based approaches, or the so-called spectral techniques. Moment-based approaches to learning the parameters of a probabilistic model are of increasing interest recently since they enjoy guarantees such as global convergence under certain conditions, unlike EM, which is often plagued by the issue of getting stuck in local optima. Algorithms with guarantees for learning can be derived for a number of important models such as mixture models and HMMs. For these spectral methods, no spurious local optima occur, and the true parameters can be consistently estimated under some regularity conditions.

其他替代方法还包括基于矩的方法,即所谓的谱方法。基于矩的概率模型参数学习方法近年来越来越受到关注,因为它们在一定条件下具有全局收敛性等保证,而 EM 则常常陷入局部最优。对于混合模型、隐马尔可夫模型等许多重要模型,都可以推导出具有学习保证的算法。对于这些谱方法,不会出现伪局部最优,并且在一定的正则性条件下可以一致地估计真实参数。

正确性证明

EM is a partially non-Bayesian, maximum likelihood method. Its final result gives a probability distribution over the latent variables (in the Bayesian style) together with a point estimate for θ (either a maximum likelihood estimate or a posterior mode). A fully Bayesian version of this may be wanted, giving a probability distribution over θ and the latent variables. The Bayesian approach to inference is simply to treat θ as another latent variable. In this paradigm, the distinction between the E and M steps disappears. If using the factorized Q approximation as described above (variational Bayes), solving can iterate over each latent variable (now including θ) and optimize them one at a time. Now, k steps per iteration are needed, where k is the number of latent variables. For graphical models this is easy to do as each variable's new Q depends only on its Markov blanket, so local message passing can be used for efficient inference.

EM 是一个部分非贝叶斯,最大似然方法。它的最终结果给出了一个关于潜在变量的概率分布估计(在贝叶斯风格)以及 θ 的点估计(无论是最大似然估计还是后验模式)。一个完整的贝叶斯版本即给出一个关于θ 和潜在变量的概率分布可能是想要的。贝叶斯推理方法简单地将 θ 作为另一个潜变量来处理。在这个范例中,E 和 M 步骤之间的区别就消失了。如果使用上述因子化 Q 近似(变分贝叶斯) ,求解可以迭代每个潜变量(现在包括 θ) 并每次优化一个。现在,每次迭代需要 k 个步骤,其中 k 是潜变量的数量。对于图形模型,这是很容易做到的,因为每个变量的新 Q 只依赖于它的马尔可夫包层,所以局部信息传递可以用于有效的推理。

Expectation-maximization works to improve [math]\displaystyle{ Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) }[/math] rather than directly improving [math]\displaystyle{ \log p(\mathbf{X}\mid\boldsymbol\theta) }[/math]. Here is shown that improvements to the former imply improvements to the latter.

For any [math]\displaystyle{ \mathbf{Z} }[/math] with non-zero probability [math]\displaystyle{ p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta) }[/math], we can write

[math]\displaystyle{ \log p(\mathbf{X}\mid\boldsymbol\theta) = \log p(\mathbf{X},\mathbf{Z}\mid\boldsymbol\theta) - \log p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta). }[/math]

We take the expectation over possible values of the unknown data [math]\displaystyle{ \mathbf{Z} }[/math] under the current parameter estimate [math]\displaystyle{ \theta^{(t)} }[/math] by multiplying both sides by [math]\displaystyle{ p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}) }[/math] and summing (or integrating) over [math]\displaystyle{ \mathbf{Z} }[/math]. The left-hand side is the expectation of a constant, so we get:

[math]\displaystyle{ \begin{align} \log p(\mathbf{X}\mid\boldsymbol\theta) & = \sum_{\mathbf{Z}} p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}) \log p(\mathbf{X},\mathbf{Z}\mid\boldsymbol\theta) - \sum_{\mathbf{Z}} p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}) \log p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta) \\ & = Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) + H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}), \end{align} }[/math]

where [math]\displaystyle{ H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) }[/math] is defined by the negated sum it is replacing.

This last equation holds for every value of [math]\displaystyle{ \boldsymbol\theta }[/math] including [math]\displaystyle{ \boldsymbol\theta = \boldsymbol\theta^{(t)} }[/math],

[math]\displaystyle{ \log p(\mathbf{X}\mid\boldsymbol\theta^{(t)}) = Q(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}) + H(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}), }[/math]

and subtracting this last equation from the previous equation gives

[math]\displaystyle{ \log p(\mathbf{X}\mid\boldsymbol\theta) - \log p(\mathbf{X}\mid\boldsymbol\theta^{(t)}) = Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) - Q(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}) + H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) - H(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}), }[/math]

However, Gibbs' inequality tells us that [math]\displaystyle{ H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) \ge H(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}) }[/math], so we can conclude that

[math]\displaystyle{ \log p(\mathbf{X}\mid\boldsymbol\theta) - \log p(\mathbf{X}\mid\boldsymbol\theta^{(t)}) \ge Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) - Q(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}). }[/math]

In words, choosing [math]\displaystyle{ \boldsymbol\theta }[/math] to improve [math]\displaystyle{ Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) }[/math] causes [math]\displaystyle{ \log p(\mathbf{X}\mid\boldsymbol\theta) }[/math] to improve at least as much.



作为最大化-最大化过程

The EM algorithm can be viewed as two alternating maximization steps, that is, as an example of coordinate descent. Consider the function:

EM算法可以看作是两个交替的最大化步骤,即作为坐标下降法的一个例子。 考虑函数:

[math]\displaystyle{ F(q,\theta) := \operatorname{E}_q [ \log L (\theta ; x,Z) ] + H(q), }[/math]

where q is an arbitrary probability distribution over the unobserved data z and H(q) is the entropy of the distribution q. This function can be written as

q 是未观测数据 z 上的任意概率分布,H(q) 是分布 q 的熵。 这个函数可以写成

[math]\displaystyle{ F(q,\theta) = -D_{\mathrm{KL}}\big(q \parallel p_{Z\mid X}(\cdot\mid x;\theta ) \big) + \log L(\theta;x), }[/math]

where [math]\displaystyle{ p_{Z\mid X}(\cdot\mid x;\theta ) }[/math] is the conditional distribution of the unobserved data given the observed data [math]\displaystyle{ x }[/math] and [math]\displaystyle{ D_{KL} }[/math] is the Kullback–Leibler divergence.
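The decomposition of [math]\displaystyle{ F(q,\theta) }[/math] above can be checked numerically. The following sketch (a toy model with a single binary latent variable and made-up joint probabilities) evaluates it both ways and obtains the same value:

<syntaxhighlight lang="python">
# Numerical check of the identity F(q, theta) = E_q[log p(x, Z | theta)] + H(q)
#                                            = -KL(q || p(Z | x, theta)) + log p(x | theta)
# for a toy model with a single discrete latent variable Z in {0, 1}.
# The joint probabilities and the distribution q below are illustrative assumptions.
import numpy as np

p_xz = np.array([0.12, 0.28])          # p(x, Z = j | theta) for the observed x
q = np.array([0.4, 0.6])               # an arbitrary distribution over Z

log_px = np.log(p_xz.sum())            # log p(x | theta)
post = p_xz / p_xz.sum()               # p(Z | x, theta)

F_direct = (q * np.log(p_xz)).sum() - (q * np.log(q)).sum()   # E_q[log joint] + H(q)
F_via_kl = -(q * np.log(q / post)).sum() + log_px             # -KL(q || posterior) + log p(x)
print(F_direct, F_via_kl)              # identical up to floating-point error
</syntaxhighlight>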


Then the steps in the EM algorithm may be viewed as:

那么EM算法的步骤可以看成:

Expectation step: Choose [math]\displaystyle{ q }[/math] to maximize [math]\displaystyle{ F }[/math]:

期望步:选择q最大化F:

[math]\displaystyle{ q^{(t)} = \operatorname{arg\,max}_q \ F(q,\theta^{(t)}) }[/math]

Maximization step: Choose [math]\displaystyle{ \theta }[/math] to maximize [math]\displaystyle{ F }[/math]:

最大化步:选择θ最大化F:

[math]\displaystyle{ \theta^{(t+1)} = \operatorname{arg\,max}_\theta \ F(q^{(t)},\theta) }[/math]

In information geometry, the E step and the M step are interpreted as projections under dual affine connections, called the e-connection and the m-connection; the Kullback–Leibler divergence can also be understood in these terms.

在信息几何中,E 步和 M 步被解释为对偶仿射联络(分别称为 e-联络和 m-联络)下的投影;Kullback–Leibler 散度也可以从这个角度来理解。

应用

EM is frequently used for data clustering in machine learning and computer vision. In natural language processing, two prominent instances of the algorithm are the Baum–Welch algorithm for hidden Markov models, and the inside-outside algorithm for unsupervised induction of probabilistic context-free grammars.

EM is frequently used for parameter estimation of mixed models, notably in quantitative genetics.

In psychometrics, EM is almost indispensable for estimating item parameters and latent abilities of item response theory models.

With the ability to deal with missing data and observe unidentified variables, EM is becoming a useful tool to price and manage risk of a portfolio.

The EM algorithm (and its faster variant ordered subset expectation maximization) is also widely used in medical image reconstruction, especially in positron emission tomography, single photon emission computed tomography, and x-ray computed tomography. See below for other faster variants of EM.

In structural engineering, the Structural Identification using Expectation Maximization (STRIDE) algorithm is an output-only method for identifying natural vibration properties of a structural system using sensor data (see Operational Modal Analysis).

EM 经常用于机器学习和计算机视觉中的数据聚类。 在自然语言处理中,该算法的两个突出实例是用于隐马尔可夫模型的 Baum-Welch 算法和用于无监督概率上下文无关文法归纳的内-外算法。EM 经常用于混合模型的参数估计,特别是在数量遗传学中。

在心理测量学中,EM对于估计项目响应理论模型的项目参数和潜在能力几乎是必不可少的。

由于能够处理丢失的数据和观察不明变量,EM 正成为一个有用的对投资组合进行定价和管理风险的工具。

EM 算法(及其更快的变体“有序子集期望最大化”)也广泛应用于医学图像重建,特别是正电子发射断层扫描、单光子发射计算机断层扫描和 X 射线计算机断层扫描。EM 的其他更快变体见下文。

在结构工程中,利用期望最大化(STRIDE)算法进行结构识别是一种仅输出的方法,用于利用传感器数据识别结构系统的自然振动特性(见运行模态分析)。

滤波和平滑EM算法

A Kalman filter is typically used for on-line state estimation and a minimum-variance smoother may be employed for off-line or batch state estimation. However, these minimum-variance solutions require estimates of the state-space model parameters. EM algorithms can be used for solving joint state and parameter estimation problems.

卡尔曼滤波器通常用于在线状态估计,最小方差平滑器可用于离线或批状态估计。然而,这些最小方差解需要状态空间模型参数的估计。EM 算法可用于求解联合状态和参数估计问题。

Filtering and smoothing EM algorithms arise by repeating this two-step procedure:

滤波和平滑 EM 算法是通过重复这两个步骤产生的:

E-step

Operate a Kalman filter or a minimum-variance smoother designed with current parameter estimates to obtain updated state estimates.

运行按当前参数估计设计的卡尔曼滤波器或最小方差平滑器,以获得更新后的状态估计。

M-step

Use the filtered or smoothed state estimates within maximum-likelihood calculations to obtain updated parameter estimates.

使用最大似然计算中的滤波或平滑状态估计来获得更新的参数估计。

Suppose that a Kalman filter or minimum-variance smoother operates on measurements of a single-input-single-output system that possess additive white noise. An updated measurement noise variance estimate can be obtained from the maximum likelihood calculation

[math]\displaystyle{ \widehat{\sigma}^2_v = \frac{1}{N} \sum_{k=1}^N {(z_k-\widehat{x}_k)}^2, }[/math]

假设卡尔曼滤波器或最小方差平滑器处理的是带有加性白噪声的单输入单输出系统的测量值。通过上述极大似然计算,可以得到更新后的测量噪声方差估计。

where [math]\displaystyle{ \widehat{x}_k }[/math] are scalar output estimates calculated by a filter or a smoother from N scalar measurements [math]\displaystyle{ z_k }[/math]. The above update can also be applied to updating a Poisson measurement noise intensity. Similarly, for a first-order auto-regressive process, an updated process noise variance estimate can be calculated by

其中 [math]\displaystyle{ \widehat{x}_k }[/math] 是由滤波器或平滑器根据 N 个标量测量值 [math]\displaystyle{ z_k }[/math] 计算得到的标量输出估计。上述更新也可用于更新泊松测量噪声强度。类似地,对于一阶自回归过程,可以计算更新后的过程噪声方差估计,

where [math]\displaystyle{ \widehat{x}_k }[/math] and [math]\displaystyle{ \widehat{x}_{k+1} }[/math] are scalar state estimates calculated by a filter or a smoother. The updated model coefficient estimate is obtained via

其中 [math]\displaystyle{ \widehat{x}_k }[/math] 和 [math]\displaystyle{ \widehat{x}_{k+1} }[/math] 是由滤波器或平滑器计算的标量状态估计,由此可得到更新后的模型系数估计。

The convergence of parameter estimates such as those above is well studied.

上述这类参数估计的收敛性已得到充分研究。
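A rough, self-contained sketch of this two-step procedure (a scalar random-walk state with known process noise and an unknown measurement noise variance; the model and all numbers are illustrative assumptions, not from the article): each pass runs a Kalman filter with the current noise estimate and then re-estimates [math]\displaystyle{ \widehat{\sigma}^2_v }[/math] with the formula given above:

<syntaxhighlight lang="python">
# Rough sketch of the filtering EM idea above for a scalar system:
#   x_{k+1} = x_k + w_k  (process noise variance q, assumed known),
#   z_k     = x_k + v_k  (measurement noise variance r, to be estimated).
# E-step: run a Kalman filter with the current estimate of r.
# M-step: update r with the maximum-likelihood formula given above.
# All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, q, r_true = 500, 0.01, 0.5
x_true = np.cumsum(rng.normal(0.0, np.sqrt(q), N))
z = x_true + rng.normal(0.0, np.sqrt(r_true), N)

def kalman_filter(z, q, r):
    x_est, p_est = 0.0, 1.0
    out = np.empty(z.size)
    for k, zk in enumerate(z):
        p_pred = p_est + q                    # predict (random-walk model, x_pred = x_est)
        gain = p_pred / (p_pred + r)          # update with measurement zk
        x_est = x_est + gain * (zk - x_est)
        p_est = (1.0 - gain) * p_pred
        out[k] = x_est
    return out

r_hat = 1.0                                    # initial guess for the measurement noise variance
for _ in range(20):
    x_hat = kalman_filter(z, q, r_hat)         # state estimates with current parameters
    r_hat = np.mean((z - x_hat) ** 2)          # sigma_v^2 update from the formula above
print(r_hat)
</syntaxhighlight>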

变体

A number of methods have been proposed to accelerate the sometimes slow convergence of the EM algorithm, such as those using conjugate gradient and modified Newton's methods (Newton–Raphson). Also, EM can be used with constrained estimation methods.

针对 EM 算法有时收敛缓慢的问题,人们提出了许多加速方法,例如使用共轭梯度法和修正牛顿法(Newton–Raphson)的方法。此外,EM 还可以与约束估计方法结合使用。

Parameter-expanded expectation maximization (PX-EM) algorithm often provides speed up by "using a 'covariance adjustment' to correct the analysis of the M step, capitalising on extra information captured in the imputed complete data".

参数扩展期望最大化(PX-EM)算法通常能加快速度,其做法是“利用在插补完整数据中捕获的额外信息,通过‘协方差调整’来校正 M 步的分析”。

Expectation conditional maximization (ECM) replaces each M step with a sequence of conditional maximization (CM) steps in which each parameter θi is maximized individually, conditionally on the other parameters remaining fixed. It can itself be extended into the expectation conditional maximization either (ECME) algorithm. This idea is further extended in the generalized expectation maximization (GEM) algorithm, in which only an increase in the objective function F is sought for both the E step and the M step, as described in the "As a maximization-maximization procedure" section.

期望条件最大化(ECM)用一系列条件最大化(CM)步骤代替每个 M 步,其中每个参数 θi 被单独最大化,条件是其他参数保持不变。它本身还可以扩展为期望条件最大化(ECME)算法。这一思想在广义期望最大化(GEM)算法中得到了进一步推广,该算法在 E 步和 M 步中都只要求目标函数 F 有所增加,如“作为最大化-最大化过程”一节所述。
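To illustrate just the conditional-maximization idea on its own (outside a full EM loop), the sketch below fits a two-parameter straight line by maximizing the likelihood over each parameter in turn while the other is held fixed; the data and starting values are illustrative assumptions:

<syntaxhighlight lang="python">
# Sketch of conditional maximization (CM) steps on their own: the two parameters of a
# straight-line fit are each maximized in closed form while the other is held fixed.
# In ECM, steps of this kind replace the single M step inside the EM loop.
# Data and starting values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 50)
y = 1.5 + 0.8 * t + rng.normal(0.0, 0.3, t.size)

a, b = 0.0, 0.0
for _ in range(100):
    a = np.mean(y - b * t)                    # CM step 1: maximize over a with b fixed
    b = np.sum(t * (y - a)) / np.sum(t ** 2)  # CM step 2: maximize over b with a fixed
print(a, b)
</syntaxhighlight>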

It is also possible to consider the EM algorithm as a subclass of the MM (Majorize/Minimize or Minorize/Maximize, depending on context) algorithm, and therefore use any machinery developed in the more general case.

也可以将 EM 算法视为 MM(Majorize/Minimize 或 Minorize/Maximize,取决于上下文)算法的一个子类,从而可以使用在更一般情形下发展出来的任何方法。

α-EM算法

The Q-function used in the EM algorithm is based on the log likelihood. Therefore, it is regarded as the log-EM algorithm. The use of the log likelihood can be generalized to that of the α-log likelihood ratio. Then, the α-log likelihood ratio of the observed data can be exactly expressed as equality by using the Q-function of the α-log likelihood ratio and the α-divergence. Obtaining this Q-function is a generalized E step. Its maximization is a generalized M step. This pair is called the α-EM algorithm which contains the log-EM algorithm as its subclass. Thus, the α-EM algorithm by Yasuo Matsuyama is an exact generalization of the log-EM algorithm. No computation of gradient or Hessian matrix is needed. The α-EM shows faster convergence than the log-EM algorithm by choosing an appropriate α. The α-EM algorithm leads to a faster version of the Hidden Markov model estimation algorithm α-HMM.

EM 算法中使用的 Q 函数基于对数似然,因此被称为 log-EM 算法。对数似然的使用可以推广到 α-对数似然比。于是,利用 α-对数似然比的 Q 函数和 α-散度,可以将观测数据的 α-对数似然比精确地表示为等式。获得这个 Q 函数是广义的 E 步,其最大化是广义的 M 步。这一对步骤被称为 α-EM 算法,它包含 log-EM 算法作为其子类。因此,Matsuyama 提出的 α-EM 算法是 log-EM 算法的精确推广,不需要计算梯度或 Hessian 矩阵。通过选择合适的 α,α-EM 比 log-EM 算法收敛更快。α-EM 算法还带来了隐马尔可夫模型估计算法的一个更快版本 α-HMM。

高斯混合

图: 基于 ELKI 的人工数据上 k-均值与 EM 聚类的对比。利用方差,EM 算法可以准确地描述正态分布,而 k-均值则把数据分割成 Voronoi 单元;聚类中心用较亮、较大的符号表示。

图: 一个动画,演示 EM 算法将双分量高斯混合模型拟合到 Old Faithful 数据集,从随机初始化逐步收敛。

Let [math]\displaystyle{ \mathbf{x} = (\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_n) }[/math] be a sample of [math]\displaystyle{ n }[/math] independent observations from a mixture of two multivariate normal distributions of dimension [math]\displaystyle{ d }[/math], and let [math]\displaystyle{ \mathbf{z} = (z_1,z_2,\ldots,z_n) }[/math] be the latent variables that determine the component from which the observation originates. Special cases of this model include censored or truncated observations from one normal distribution.

设 [math]\displaystyle{ \mathbf{x} = (\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_n) }[/math] 是来自两个 [math]\displaystyle{ d }[/math] 维多元正态分布的混合分布的 [math]\displaystyle{ n }[/math] 个独立观测样本,并设 [math]\displaystyle{ \mathbf{z} = (z_1,z_2,\ldots,z_n) }[/math] 是决定每个观测来自哪个成分的潜在变量。该模型的特殊情形包括来自单个正态分布的删失或截断观测。
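Tied to the setup above, here is a sketch of the corresponding E step (two [math]\displaystyle{ d }[/math]-dimensional Gaussian components with made-up parameter values) that computes the posterior membership probabilities [math]\displaystyle{ p(z_i = j\mid \mathbf{x}_i,\boldsymbol\theta) }[/math]:

<syntaxhighlight lang="python">
# Sketch of the E step for the two-component multivariate mixture described above:
# compute p(z_i = j | x_i, theta) for each observation x_i.
# The data and parameter values below are illustrative assumptions.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
d = 2
x = np.vstack([rng.normal(0.0, 1.0, (100, d)), rng.normal(3.0, 1.0, (150, d))])

tau = np.array([0.5, 0.5])                        # mixing weights
means = [np.zeros(d), np.full(d, 3.0)]
covs = [np.eye(d), np.eye(d)]

dens = np.column_stack([tau[j] * multivariate_normal.pdf(x, mean=means[j], cov=covs[j])
                        for j in range(2)])
resp = dens / dens.sum(axis=1, keepdims=True)     # responsibilities p(z_i = j | x_i, theta)
print(resp[:3])
</syntaxhighlight>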

This page was moved from wikipedia:en:Expectation–maximization algorithm. Its edit history can be viewed at EM算法/edithistory