此词条暂由彩云小译翻译,翻译字数共2199,未经人工整理和审校,带来阅读不便,请见谅。
    
{{short description|Iterative method for finding maximum likelihood estimates in statistical models}}
 
In [[statistics]], an '''expectation–maximization''' ('''EM''') '''algorithm''' is an [[iterative method]] to find (local) [[maximum likelihood]] or [[maximum a posteriori]] (MAP) estimates of [[parameter]]s in [[statistical model]]s, where the model depends on unobserved [[latent variable]]s. The EM iteration alternates between performing an expectation (E) step, which creates a function for the expectation of the [[Likelihood function#Log-likelihood|log-likelihood]] evaluated using the current estimate for the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found on the ''E'' step. These parameter-estimates are then used to determine the distribution of the latent variables in the next E step.
+
在统计学中，期望最大化（EM）算法是在统计模型依赖于未观测潜变量时，寻找参数的（局部）最大似然估计或最大后验（MAP）估计的一种迭代方法。EM 迭代在期望（E）步与最大化（M）步之间交替进行：E 步利用当前的参数估计，构造对数似然期望的函数；M 步则求出使 E 步所得期望对数似然最大化的参数。得到的参数估计随后用于确定下一个 E 步中潜变量的分布。
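The following is a minimal sketch of this E/M alternation for a two-component one-dimensional Gaussian mixture (NumPy assumed available; all function and variable names are illustrative, not from any library):

<syntaxhighlight lang="python">
# Minimal sketch of the E/M alternation for a two-component 1-D Gaussian mixture.
# All names (em_1d_mixture, tau, mu, sigma2, r) are illustrative.
import numpy as np

def em_1d_mixture(x, n_iter=50):
    # crude initialisation of mixing weight, means and variances
    tau, mu, sigma2 = 0.5, np.array([x.min(), x.max()]), np.array([x.var(), x.var()])
    for _ in range(n_iter):
        # E step: posterior probability that each point belongs to component 1
        # (densities written up to the common 1/sqrt(2*pi) factor, which cancels)
        p1 = tau * np.exp(-(x - mu[0]) ** 2 / (2 * sigma2[0])) / np.sqrt(sigma2[0])
        p2 = (1 - tau) * np.exp(-(x - mu[1]) ** 2 / (2 * sigma2[1])) / np.sqrt(sigma2[1])
        r = p1 / (p1 + p2)                      # "membership probabilities"
        # M step: re-estimate the parameters from the responsibility-weighted data
        tau = r.mean()
        mu = np.array([np.sum(r * x) / r.sum(),
                       np.sum((1 - r) * x) / (1 - r).sum()])
        sigma2 = np.array([np.sum(r * (x - mu[0]) ** 2) / r.sum(),
                           np.sum((1 - r) * (x - mu[1]) ** 2) / (1 - r).sum()])
    return tau, mu, sigma2

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 0.5, 100)])
print(em_1d_mixture(x))
</syntaxhighlight>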
The EM algorithm was explained and given its name in a classic 1977 paper by [[Arthur P. Dempster|Arthur Dempster]], [[Nan Laird]], and [[Donald Rubin]].<ref name="Dempster1977">
 
+
EM 算法在 Arthur Dempster、Nan Laird 和 Donald Rubin 于 1977 年发表的一篇经典论文中得到了解释并获得命名。
    
{{cite journal
 
|last1=Dempster  |first1= A.P. |author-link1=Arthur P. Dempster
      
      
  |last2=Laird |first2=N.M. |author-link2=Nan Laird |last3=Rubin
 
+
The EM algorithm is used to find (local) maximum likelihood parameters of a statistical model in cases where the equations cannot be solved directly. Typically these models involve latent variables in addition to unknown parameters and known data observations. That is, either missing values exist among the data, or the model can be formulated more simply by assuming the existence of further unobserved data points. For example, a mixture model can be described more simply by assuming that each observed data point has a corresponding unobserved data point, or latent variable, specifying the mixture component to which each data point belongs.
 
+
EM 算法用于在方程不能直接求解的情况下寻找统计模型的(局部)最大似然参数。通常这些模型除了未知参数和已知数据观测值外,还涉及潜变量。也就是说,要么数据中存在缺失值,要么通过假设存在更多未观测到的数据点,可以更简单地建立模型。例如,通过假设每个观测数据点都有一个对应的未观测数据点或潜变量,并指定每个数据点所属的混合成分,可以更简单地描述混合模型。
    
  |first3=D.B. |author-link3=Donald Rubin
 
      
  |title=Maximum Likelihood from Incomplete Data via the EM Algorithm
 
+
Finding a maximum likelihood solution typically requires taking the derivatives of the likelihood function with respect to all the unknown values, the parameters and the latent variables, and simultaneously solving the resulting equations. In statistical models with latent variables, this is usually impossible. Instead, the result is typically a set of interlocking equations in which the solution to the parameters requires the values of the latent variables and vice versa, but substituting one set of equations into the other produces an unsolvable equation.
+
寻找最大似然解通常需要对似然函数关于所有未知量（参数和潜变量）求导，并同时求解所得的方程。在含潜变量的统计模型中，这通常无法做到。得到的往往是一组相互咬合的方程：求解参数需要潜变量的取值，反之亦然；而把其中一组方程代入另一组，又会得到无法求解的方程。
    
  |journal=[[Journal of the Royal Statistical Society, Series B]]
 
      
  |year=1977 |volume=39 |issue=1 |pages=1–38
 
+
The EM algorithm proceeds from the observation that there is a way to solve these two sets of equations numerically. One can simply pick arbitrary values for one of the two sets of unknowns, use them to estimate the second set, then use these new values to find a better estimate of the first set, and then keep alternating between the two until the resulting values both converge to fixed points. It's not obvious that this will work, but it can be proven that in this context it does, and that the derivative of the likelihood is (arbitrarily close to) zero at that point, which in turn means that the point is either a maximum or a saddle point. In general, multiple maxima may occur, with no guarantee that the global maximum will be found.  Some likelihoods also have singularities in them, i.e., nonsensical maxima.  For example, one of the solutions that may be found by EM in a mixture model involves setting one of the components to have zero variance and the mean parameter for the same component to be equal to one of the data points.
 
+
EM 算法的出发点是：这两组方程可以用数值方法求解。可以先为其中一组未知量任取一组值，用它们估计另一组未知量，再用新得到的值改进第一组的估计，如此在两组之间交替，直到所得的值都收敛到不动点。这种做法是否有效并不显然，但可以证明在此情形下它确实有效，并且似然的导数在该点（任意接近于）零，这意味着该点要么是极大值点，要么是鞍点。一般而言，可能存在多个极大值，且不能保证找到全局最大值。有些似然函数还存在奇点，即无意义的极大值。例如，在混合模型中 EM 可能找到的“解”之一，是令某个分量的方差为零，同时令该分量的均值参数等于某个数据点。
    
  |jstor=2984875 |mr= 0501537
 
      
}}
 
+
</ref> They pointed out that the method had been "proposed many times in special circumstances" by earlier authors. One of the earliest is the gene-counting method for estimating allele frequencies by [[Cedric Smith (statistician)|Cedric Smith]].<ref>{{cite journal |last1=Ceppelini |first1=R.M. |title=The estimation of gene frequencies in a random-mating population |journal=Ann. Hum. Genet. |volume=20 |issue=2 |pages=97–115 |doi=10.1111/j.1469-1809.1955.tb01360.x|pmid=13268982 |year=1955 |s2cid=38625779 }}</ref>  A very detailed treatment of the EM method for exponential families was published by Rolf Sundberg in his thesis and several papers<ref name="Sundberg1974">{{cite journal
+
Given the statistical model which generates a set <math>\mathbf{X}</math> of observed data, a set of unobserved latent data or missing values <math>\mathbf{Z}</math>, and a vector of unknown parameters <math>\boldsymbol\theta</math>, along with a likelihood function <math>L(\boldsymbol\theta; \mathbf{X}, \mathbf{Z}) = p(\mathbf{X}, \mathbf{Z}\mid\boldsymbol\theta)</math>, the maximum likelihood estimate (MLE) of the unknown parameters is determined by maximizing the marginal likelihood of the observed data
 
  −
他们指出，这一方法曾被更早的作者“在特殊情况下多次提出”。其中最早的之一是 Cedric Smith 用于估计等位基因频率的基因计数法。Rolf Sundberg 在其博士论文和若干论文中对指数族的 EM 方法给出了非常详细的论述。
+
给定一个统计模型，它生成一组观测数据 <math>\mathbf{X}</math>、一组未观测的潜在数据或缺失值 <math>\mathbf{Z}</math>，以及一个未知参数向量 <math>\boldsymbol\theta</math>，并带有似然函数 <math>L(\boldsymbol\theta; \mathbf{X}, \mathbf{Z}) = p(\mathbf{X}, \mathbf{Z}\mid\boldsymbol\theta)</math>，未知参数的最大似然估计（MLE）通过最大化观测数据的边际似然来确定：
    
  |last=Sundberg |first=Rolf
 
      
  |title=Maximum likelihood theory for incomplete data from an exponential family
 
+
<math>L(\boldsymbol\theta; \mathbf{X}) = p(\mathbf{X}\mid\boldsymbol\theta) = \int p(\mathbf{X},\mathbf{Z} \mid \boldsymbol\theta) \, d\mathbf{Z} </math>
 
+
    
  |journal=Scandinavian Journal of Statistics
 
      
  |volume=1 |year=1974 |issue=2 |pages=49–58
 
+
However, this quantity is often intractable (e.g. if <math>\mathbf{Z}</math> is a sequence of events, so that the number of values grows exponentially with the sequence length, the exact calculation of the sum will be extremely difficult).
 
+
然而，这个量通常难以处理（例如，若 <math>\mathbf{Z}</math> 是一个事件序列，其取值数目随序列长度呈指数增长，则精确计算这个和将极其困难）。
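As an illustration of why this marginal likelihood becomes a sum whose size explodes with the data, the following hedged sketch enumerates every assignment of a discrete latent variable for a tiny two-component mixture (SciPy assumed available; names and values are illustrative):

<syntaxhighlight lang="python">
# Illustrative sketch: for a discrete latent variable the marginal likelihood is a
# sum of p(x, z | theta) over every assignment z, and the number of assignments
# grows as 2^n, which makes direct evaluation intractable for long sequences.
import itertools
import numpy as np
from scipy.stats import norm

tau, mu, sigma = 0.4, np.array([-2.0, 3.0]), np.array([1.0, 0.5])
x = np.array([-1.9, 2.8, 3.1, -2.2])            # tiny data set, n = 4

def joint(x, z):
    """p(x, z | theta) for one complete assignment z of points to components."""
    w = np.where(z == 0, tau, 1 - tau)
    return np.prod(w * norm.pdf(x, loc=mu[z], scale=sigma[z]))

marginal = sum(joint(x, np.array(z))
               for z in itertools.product([0, 1], repeat=len(x)))
print(marginal)   # already 2**4 = 16 terms; the count doubles with every data point
</syntaxhighlight>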
    
  |jstor=4615553 |mr= 381110
 
      
}}</ref><ref name="Sundberg1971">
 
+
The EM algorithm seeks to find the MLE of the marginal likelihood by iteratively applying these two steps:
+
EM 算法通过迭代应用这两个步骤来寻找边际似然的最大似然估计:
    
Rolf Sundberg. 1971. ''Maximum likelihood theory and applications for distributions generated when observing a function of an exponential family variable''. Dissertation, Institute for Mathematical Statistics, Stockholm University.</ref><ref name="Sundberg1976">{{cite journal
 
+
Expectation step (E step): Define <math>Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)})</math> as the expected value of the log likelihood function of <math>\boldsymbol\theta</math>, with respect to the current conditional distribution of <math>\mathbf{Z}</math> given <math>\mathbf{X}</math> and the current estimates of the parameters <math>\boldsymbol\theta^{(t)}</math>:
+
期望步骤（E 步）：将 <math>Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)})</math> 定义为 <math>\boldsymbol\theta</math> 的对数似然函数的期望值，其中期望是关于在给定 <math>\mathbf{X}</math> 和当前参数估计 <math>\boldsymbol\theta^{(t)}</math> 时 <math>\mathbf{Z}</math> 的当前条件分布取的：
    
  |last=Sundberg |first=Rolf
 
+
<math>Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) = \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}}\left[ \log L (\boldsymbol\theta; \mathbf{X},\mathbf{Z}) \right] \,</math>
 
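A hedged sketch of how this Q function can be formed in practice for a two-component one-dimensional Gaussian mixture: the responsibilities computed under <math>\boldsymbol\theta^{(t)}</math> act as fixed weights on the complete-data log-likelihood (names such as q_function are illustrative):

<syntaxhighlight lang="python">
# Sketch of Q(theta | theta_t) for a two-component 1-D Gaussian mixture:
# Q = sum_i sum_j p(Z_i=j | x_i, theta_t) * (log tau_j + log N(x_i; mu_j, sigma_j)).
import numpy as np
from scipy.stats import norm

x = np.array([-2.1, -1.7, 2.9, 3.2, 3.0])

def posterior(theta):
    tau, mu, sigma = theta
    p = np.vstack([tau * norm.pdf(x, mu[0], sigma[0]),
                   (1 - tau) * norm.pdf(x, mu[1], sigma[1])])
    return p / p.sum(axis=0)                    # responsibilities, one row per component

def q_function(theta, theta_t):
    tau, mu, sigma = theta
    resp = posterior(theta_t)                   # fixed, data-dependent weights from theta_t
    logs = np.vstack([np.log(tau) + norm.logpdf(x, mu[0], sigma[0]),
                      np.log(1 - tau) + norm.logpdf(x, mu[1], sigma[1])])
    return np.sum(resp * logs)

theta_t = (0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0]))
better  = (0.4, np.array([-1.9, 3.0]), np.array([0.3, 0.2]))
print(q_function(theta_t, theta_t), q_function(better, theta_t))  # the second value should be larger
</syntaxhighlight>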
    
  |year=1976
 
      
  |title=An iterative method for solution of the likelihood equations for incomplete data from exponential families
 
+
Maximization step (M step): Find the parameters that maximize this quantity:
+
最大化步骤(m 步骤) : 找到最大化这个数量的参数:
    
  |journal=Communications in Statistics – Simulation and Computation
 
+
<math>\boldsymbol\theta^{(t+1)} = \underset{\boldsymbol\theta}{\operatorname{arg\,max}} \ Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) \, </math>
    
  |volume=5 |issue=1 |pages=55–64
 
+
  |doi=10.1080/03610917608812007 |mr=443190
+
The typical models to which EM is applied use <math>\mathbf{Z}</math> as a latent variable indicating membership in one of a set of groups:
+
应用 EM 的典型模型把 <math>\mathbf{Z}</math> 作为潜变量，用来指示每个数据点属于若干分组中的哪一组：
+
+
The observed data points <math>\mathbf{X}</math> may be discrete (taking values in a finite or countably infinite set) or continuous (taking values in an uncountably infinite set). Associated with each data point may be a vector of observations.
   −
}}</ref> following his collaboration with [[Per Martin-Löf]] and [[Anders Martin-Löf]].<ref>See the acknowledgement by Dempster, Laird and Rubin on pages 3, 5 and 11.</ref><ref>G. Kulldorff. 1961.'' Contributions to the theory of estimation from grouped and partially grouped samples''. Almqvist & Wiksell.</ref><ref name="Martin-Löf1963">Anders Martin-Löf. 1963. "Utvärdering av livslängder i subnanosekundsområdet" ("Evaluation of sub-nanosecond lifetimes"). ("Sundberg formula")</ref><ref name="Martin-Löf1966">[[Per Martin-Löf]]. 1966. ''Statistics from the point of view of statistical mechanics''. Lecture notes, Mathematical Institute, Aarhus University. ("Sundberg formula" credited to Anders Martin-Löf).</ref><ref name="Martin-Löf1970">[[Per Martin-Löf]]. 1970. ''Statistika Modeller (Statistical Models): Anteckningar från seminarier läsåret 1969–1970 (Notes from seminars in the academic year 1969-1970), with the assistance of Rolf Sundberg.'' Stockholm University. ("Sundberg formula")</ref><!-- * Martin-Löf, P. "Exact tests, confidence regions and estimates", with a discussion by [[A. W. F. Edwards]], [[George A. Barnard|G. A. Barnard]], D. A. Sprott, O. Barndorff-Nielsen, [[D. Basu]] and [[Rasch model|G. Rasch]]. ''Proceedings of Conference on Foundational Questions in Statistical Inference'' (Aarhus, 1973), pp. 121–138. Memoirs, No. 1, Dept. Theoret. Statist., Inst. Math., Univ. Aarhus, Aarhus, 1974. --><ref name="Martin-Löf1974a">Martin-Löf, P. The notion of redundancy and its use as a quantitative measure of the deviation between a statistical hypothesis and a set of observational data. With a discussion by F. Abildgård, [[Arthur P. Dempster|A. P. Dempster]], [[D. Basu]], [[D. R. Cox]], [[A. W. F. Edwards]], D. A. Sprott, [[George A. Barnard|G. A. Barnard]], O. Barndorff-Nielsen, J. D. Kalbfleisch and [[Rasch model|G. Rasch]] and a reply by the author. ''Proceedings of Conference on Foundational Questions in Statistical Inference'' (Aarhus, 1973), pp. 1–42. Memoirs, No. 1, Dept. Theoret. Statist., Inst. Math., Univ. Aarhus, Aarhus, 1974.</ref><ref name="Martin-Löf1974b">{{cite journal |last=Martin-Löf |first=Per |title=The notion of redundancy and its use as a quantitative measure of the discrepancy between a statistical hypothesis and a set of observational data |journal=Scand. J. Statist. |volume=1 |year=1974 |issue=1 |pages=3–18 |doi= }}</ref> The Dempster–Laird–Rubin paper in 1977 generalized the method and sketched a convergence analysis for a wider class of problems. Regardless of earlier inventions, the innovative Dempster–Laird–Rubin paper in the ''Journal of the Royal Statistical Society'' received an enthusiastic discussion at the Royal Statistical Society meeting with Sundberg calling the paper "brilliant". The Dempster–Laird–Rubin paper established the EM method as an important tool of statistical analysis.
+
观察到的数据点 < math > mathbf { x } </math > 可以是离散的(在有限或可数集中取值)或连续的(在不可数的无限集中取值)。与每个数据点相关联的可能是一个观测矢量。
这源自他与 Per Martin-Löf 和 Anders Martin-Löf 的合作。1977 年的 Dempster–Laird–Rubin 论文推广了该方法，并为更广泛的一类问题勾勒了收敛性分析。且不论更早的发明，这篇发表于《Journal of the Royal Statistical Society》的创新性论文在皇家统计学会的会议上引发了热烈讨论，Sundberg 称其“精彩绝伦”。Dempster–Laird–Rubin 的论文确立了 EM 方法作为统计分析重要工具的地位。
      +
The missing values (aka latent variables) <math>\mathbf{Z}</math> are discrete, drawn from a fixed number of values, and with one latent variable per observed unit.
    +
缺失值(又名潜在变量) < math > mathbf { z } </math > 是离散的,从固定数量的值中提取,每个观察单位有一个潜在变量。
    
The convergence analysis of the Dempster–Laird–Rubin algorithm was flawed and a correct convergence analysis was published by [[C. F. Jeff Wu]] in 1983.<ref name="Wu">
 
+
The parameters are continuous, and are of two kinds: Parameters that are associated with all data points, and those associated with a specific value of a latent variable (i.e., associated with all data points which corresponding latent variable has that value).
   −
Dempster–Laird–Rubin 算法的收敛性分析存在缺陷；C. F. Jeff Wu 于 1983 年发表了正确的收敛性分析。
+
参数是连续的,有两种类型: 一种是与所有数据点相关联的参数,另一种是与潜在变量的特定值相关联的参数(例如,与相应潜在变量具有该值的所有数据点相关联的参数)。
    
{{cite journal
 
+
However, it is possible to apply EM to other sorts of models.
+
然而,将 EM 应用于其他类型的模型是可能的。
    
|first=C. F. Jeff
 
      
|last=Wu
 
+
The motive is as follows.  If the value of the parameters <math>\boldsymbol\theta</math> is known, usually the value of the latent variables <math>\mathbf{Z}</math> can be found by maximizing the log-likelihood over all possible values of <math>\mathbf{Z}</math>, either simply by iterating over <math>\mathbf{Z}</math> or through an algorithm such as the Baum–Welch algorithm for hidden Markov models.  Conversely, if we know the value of the latent variables <math>\mathbf{Z}</math>, we can find an estimate of the parameters <math>\boldsymbol\theta</math> fairly easily, typically by simply grouping the observed data points according to the value of the associated latent variable and averaging the values, or some function of the values, of the points in each group.  This suggests an iterative algorithm, in the case where both <math>\boldsymbol\theta</math> and <math>\mathbf{Z}</math> are unknown:
+
动机如下。如果参数 <math>\boldsymbol\theta</math> 的值已知，通常可以通过在 <math>\mathbf{Z}</math> 的所有可能取值上最大化对数似然来求得潜变量 <math>\mathbf{Z}</math> 的值——既可以简单地遍历 <math>\mathbf{Z}</math>，也可以借助诸如隐马尔可夫模型的 Baum–Welch 算法之类的算法。反过来，如果知道潜变量 <math>\mathbf{Z}</math> 的值，就能相当容易地得到参数 <math>\boldsymbol\theta</math> 的估计——通常只需按相应潜变量的取值对观测数据点分组，再对每组数据点的值（或这些值的某个函数）求平均。这提示了在 <math>\boldsymbol\theta</math> 与 <math>\mathbf{Z}</math> 均未知时的一种迭代算法：
    
|title=On the Convergence Properties of the EM Algorithm
 
+
First, initialize the parameters <math>\boldsymbol\theta</math> to some random values.
+
首先，把参数 <math>\boldsymbol\theta</math> 初始化为一些随机值。
    
|journal=[[Annals of Statistics]]
 
+
Compute the probability of each possible value of <math>\mathbf{Z}</math> , given <math>\boldsymbol\theta</math>.
+
在给定 <math>\boldsymbol\theta</math> 的条件下，计算 <math>\mathbf{Z}</math> 每个可能取值的概率。
    
|volume=11
 
+
Then, use the just-computed values of <math>\mathbf{Z}</math> to compute a better estimate for the parameters <math>\boldsymbol\theta</math>.
+
然后，用刚刚算出的 <math>\mathbf{Z}</math> 的值来计算参数 <math>\boldsymbol\theta</math> 的更好估计。
    
|issue=1
 
+
Iterate steps 2 and 3 until convergence.
+
迭代步骤2和3直到收敛。
    
|date=Mar 1983
 
+
The algorithm as just described monotonically approaches a local minimum of the cost function.
 
+
上述算法单调地接近代价函数的局部极小值。
    
|pages=95–103
 
      
|jstor=2240463
 
      
|doi=10.1214/aos/1176346060
 
+
Speaking of an expectation (E) step is a bit of a misnomer. What are calculated in the first step are the fixed, data-dependent parameters of the function Q. Once the parameters of Q are known, it is fully determined and is maximized in the second (M) step of an EM algorithm.
+
把第一步称为期望（E）步多少有些用词不当。第一步中真正计算的是函数 Q 的固定的、依赖于数据的参数。一旦 Q 的参数已知，Q 就完全确定，并在 EM 算法的第二步（M 步）中被最大化。
    
|mr= 684867
 
      
|doi-access=free
 
+
Although an EM iteration does increase the observed data (i.e., marginal) likelihood function, no guarantee exists that the sequence converges to a maximum likelihood estimator. For multimodal distributions, this means that an EM algorithm may converge to a local maximum of the observed data likelihood function, depending on starting values. A variety of heuristic or metaheuristic approaches exist to escape a local maximum, such as random-restart hill climbing (starting with several different random initial estimates θ<sup>(t)</sup>), or applying simulated annealing methods.
+
虽然 EM 迭代确实会增大观测数据（即边际）似然函数，但不能保证该序列收敛到最大似然估计量。对于多峰分布，这意味着 EM 算法可能收敛到观测数据似然函数的某个局部极大值，结果取决于初始值。为跳出局部极大值，存在多种启发式或元启发式方法，例如随机重启爬山法（从若干不同的随机初始估计 θ<sup>(t)</sup> 开始），或采用模拟退火方法。
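A hedged sketch of the random-restart idea, assuming scikit-learn (whose GaussianMixture class is an EM implementation) is available; the n_init argument simply reruns EM from several random initialisations and keeps the best fit:

<syntaxhighlight lang="python">
# Random-restart sketch: rerun EM from several random initialisations via n_init.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, size=(200, 2)), rng.normal(3, 0.5, size=(100, 2))])

single  = GaussianMixture(n_components=2, init_params="random", n_init=1,  random_state=0).fit(X)
restart = GaussianMixture(n_components=2, init_params="random", n_init=10, random_state=0).fit(X)
print(single.score(X), restart.score(X))   # average log-likelihood; the restarted fit typically scores at least as well
</syntaxhighlight>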
    
}}</ref>
 
+
Wu's proof established the EM method's convergence outside of the [[exponential family]], as claimed by Dempster–Laird–Rubin.<ref name="Wu" />
+
EM is especially useful when the likelihood is an exponential family: the E step becomes the sum of expectations of sufficient statistics, and the M step involves maximizing a linear function. In such a case, it is usually possible to derive closed-form expression updates for each step, using the Sundberg formula (published by Rolf Sundberg using unpublished results of Per Martin-Löf and Anders Martin-Löf).
+
当似然属于指数族时，EM 尤其有用：此时 E 步变成对充分统计量的期望求和，M 步则是最大化一个线性函数。在这种情况下，通常可以利用 Sundberg 公式（由 Rolf Sundberg 发表，使用了 Per Martin-Löf 和 Anders Martin-Löf 未发表的结果）为每一步推导出封闭形式的更新表达式。
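A hedged sketch of what such closed-form updates look like for a Gaussian mixture M step: the new parameters are simple functions of responsibility-weighted sufficient statistics (NumPy assumed available; names are illustrative):

<syntaxhighlight lang="python">
# Closed-form M-step updates from weighted sufficient statistics (illustrative names).
import numpy as np

def m_step(X, resp):
    """X: (n, d) data; resp: (n, k) membership probabilities from the E step."""
    nk = resp.sum(axis=0)                                   # effective counts per component
    tau = nk / len(X)                                       # mixing weights
    mu = (resp.T @ X) / nk[:, None]                         # weighted means
    Sigma = []
    for j in range(resp.shape[1]):
        d = X - mu[j]
        Sigma.append((resp[:, j, None] * d).T @ d / nk[j])  # weighted covariances
    return tau, mu, np.array(Sigma)

# toy usage with random responsibilities, just to show the shapes involved
rng = np.random.default_rng(1)
X = rng.normal(size=(6, 2))
resp = rng.dirichlet([1.0, 1.0], size=6)
print(m_step(X, resp))
</syntaxhighlight>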
吴（Wu）的证明确立了 EM 方法在指数族之外的收敛性，正如 Dempster–Laird–Rubin 所声称的那样。
      +
==Introduction==
    +
The EM method was modified to compute maximum a posteriori (MAP) estimates for Bayesian inference in the original paper by Dempster, Laird, and Rubin.
+
在 Dempster、 Laird 和 Rubin 的原始论文中,EM 方法被修改为计算贝叶斯推断的最大后验概率估计。
    
The EM algorithm is used to find (local) [[maximum likelihood]] parameters of a [[statistical model]] in cases where the equations cannot be solved directly.  Typically these models involve [[latent variable]]s in addition to unknown [[parameters]] and known data observations.  That is, either [[missing values]] exist among the data, or the model can be formulated more simply by assuming the existence of further unobserved data points. For example, a [[mixture model]] can be described more simply by assuming that each observed data point has a corresponding unobserved data point, or latent variable, specifying the mixture component to which each data point belongs.
 
     −
Em 算法用于在方程不能直接求解的情况下寻找统计模型的(局部)最大似然参数。除了未知参数和已知数据观测值之外,这些模型通常还涉及潜变量。也就是说,要么数据中存在缺失值,要么通过假设存在更多未观测到的数据点,可以更简单地建立模型。例如,通过假设每个观测数据点都有一个对应的未观测数据点或潜在变量,并指定每个数据点所属的混合成分,可以更简单地描述混合模型。
      +
Other methods exist to find maximum likelihood estimates, such as gradient descent, conjugate gradient, or variants of the Gauss–Newton algorithm. Unlike EM, such methods typically require the evaluation of first and/or second derivatives of the likelihood function.
    +
还有其他方法可以找到最大似然估计,比如梯度下降法、共轭梯度或 Gauss-Newton 算法的变体。与 EM 不同,这类方法通常需要对似然函数的一阶和/或二阶导数求值。
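For comparison, a hedged sketch of a derivative-based alternative: maximising the observed-data log-likelihood of a simple two-component mixture directly with BFGS from SciPy (only the two means are optimised here, to keep the example short; names are illustrative):

<syntaxhighlight lang="python">
# Derivative-based alternative: BFGS on the observed-data log-likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 100)])

def neg_log_lik(mu, tau=0.5, sigma=1.0):
    mix = tau * norm.pdf(x, mu[0], sigma) + (1 - tau) * norm.pdf(x, mu[1], sigma)
    return -np.sum(np.log(mix))

result = minimize(neg_log_lik, x0=np.array([-1.0, 1.0]), method="BFGS")
print(result.x)   # estimated means; BFGS relies on (numerically approximated) derivatives, unlike EM
</syntaxhighlight>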
    
Finding a maximum likelihood solution typically requires taking the [[derivative]]s of the [[likelihood function]] with respect to all the unknown values, the parameters and the latent variables, and simultaneously solving the resulting equations. In statistical models with latent variables, this is usually impossible. Instead, the result is typically a set of interlocking equations in which the solution to the parameters requires the values of the latent variables and vice versa, but substituting one set of equations into the other produces an unsolvable equation.
 
     −
寻找最大似然解一般需要求解似然函数对所有未知值、参数和潜变量的导数,并同时求解结果方程。在含有潜在变量的统计模型中,这通常是不可能的。相反,其结果通常是一组联锁方程,其中参数的解决需要潜在变量的值,反之亦然,但将一组方程替换成另一组方程会产生一个无法解决的方程。
      +
The EM algorithm proceeds from the observation that there is a way to solve these two sets of equations numerically. One can simply pick arbitrary values for one of the two sets of unknowns, use them to estimate the second set, then use these new values to find a better estimate of the first set, and then keep alternating between the two until the resulting values both converge to fixed points.  It's not obvious that this will work, but it can be proven that in this context it does, and that the derivative of the likelihood is (arbitrarily close to) zero at that point, which in turn means that the point is either a maximum or a [[saddle point]].<ref name="Wu" /> In general, multiple maxima may occur, with no guarantee that the global maximum will be found.  Some likelihoods also have [[Mathematical singularity|singularities]] in them, i.e., nonsensical maxima.  For example, one of the ''solutions'' that may be found by EM in a mixture model involves setting one of the components to have zero variance and the mean parameter for the same component to be equal to one of the data points.
    +
Expectation-maximization works to improve <math>Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)})</math> rather than directly improving <math>\log p(\mathbf{X}\mid\boldsymbol\theta)</math>.  Here it is shown that improvements to the former imply improvements to the latter. GEM is further developed in a distributed environment and shows promising results.
   −
+
期望最大化所改进的是 <math>Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)})</math>，而不是直接改进 <math>\log p(\mathbf{X}\mid\boldsymbol\theta)</math>。下文将说明，对前者的改进蕴含着对后者的改进。GEM 还被进一步发展到分布式环境中，并显示出可喜的结果。
   −
     −
Em 算法是从观察到有一种数值求解这两组方程的方法出发的。人们可以简单地从两组未知数中选择任意一组,用它们来估计第二组未知数,然后用这些新的值来找到第一组未知数的更好的估计,然后在两组未知数之间交替,直到最终的值都收敛到不动点。这种方法是否有效并不明显,但可以证明在这种情况下是有效的,并且可能性的导数在这一点是(任意接近)零,这反过来意味着这一点要么是一个最大值,要么是一个鞍点。一般情况下,可能会出现多个最大值,但不能保证找到全局最大值。有些可能还有奇点,即无意义的极大值。例如,在混合模型中,EM 可能找到的解之一涉及到将一个组分设置为方差为零,同一组分的均值参数等于一个数据点。
      +
<math>X_i \mid(Z_i = 1) \sim \mathcal{N}_d(\boldsymbol{\mu}_1,\Sigma_1)</math> and <math>X_i \mid(Z_i = 2) \sim \mathcal{N}_d(\boldsymbol{\mu}_2,\Sigma_2),</math>
    +
    
==Description==
 
 +
 +
where
 +
 +
其中
    
Given the [[statistical model]] which generates a set <math>\mathbf{X}</math> of observed data, a set of unobserved latent data or [[missing values]] <math>\mathbf{Z}</math>, and a vector of unknown parameters <math>\boldsymbol\theta</math>, along with a [[likelihood function]] <math>L(\boldsymbol\theta; \mathbf{X}, \mathbf{Z}) = p(\mathbf{X}, \mathbf{Z}\mid\boldsymbol\theta)</math>, the [[maximum likelihood estimate]] (MLE) of the unknown parameters is determined by maximizing the [[marginal likelihood]] of the observed data
 
+
<math>\operatorname{P} (Z_i = 1 ) = \tau_1 \, </math> and <math>\operatorname{P} (Z_i=2) = \tau_2 = 1-\tau_1.</math>
   −
+
:<math>L(\boldsymbol\theta; \mathbf{X}) = p(\mathbf{X}\mid\boldsymbol\theta) = \int  p(\mathbf{X},\mathbf{Z} \mid \boldsymbol\theta) \, d\mathbf{Z} </math>
 
+
The aim is to estimate the unknown parameters representing the mixing value between the Gaussians and the means and covariances of each:
 +
 
 +
目标是估计这些未知参数，它们表示两个高斯分量之间的混合比例，以及各分量的均值和协方差：
 +
 
   −
      +
<math>\theta = \big( \boldsymbol{\tau},\boldsymbol{\mu}_1,\boldsymbol{\mu}_2,\Sigma_1,\Sigma_2 \big),</math>
    +
    
However, this quantity is often intractable (e.g. if <math>\mathbf{Z}</math> is a sequence of events, so that the number of values grows exponentially with the sequence length, the exact calculation of the sum will be extremely difficult).
 
+
where the incomplete-data likelihood function is
 +
 
 +
不完全数据似然函数在哪里
   −
         +
<math>L(\theta;\mathbf{x}) =  \prod_{i=1}^n \sum_{j=1}^2 \tau_j \ f(\mathbf{x}_i;\boldsymbol{\mu}_j,\Sigma_j),</math>
   −
The EM algorithm seeks to find the MLE of the marginal likelihood by iteratively applying these two steps:
+
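A hedged sketch evaluating this incomplete-data likelihood with SciPy's multivariate normal density, worked in log space for numerical stability (parameter values are illustrative):

<syntaxhighlight lang="python">
# Evaluate L(theta; x) = prod_i sum_j tau_j f(x_i; mu_j, Sigma_j) in log space.
import numpy as np
from scipy.stats import multivariate_normal

tau   = np.array([0.6, 0.4])
mu    = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
Sigma = [np.eye(2), 0.5 * np.eye(2)]

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 2))                      # five 2-D observations

per_component = np.column_stack([tau[j] * multivariate_normal.pdf(x, mean=mu[j], cov=Sigma[j])
                                 for j in range(2)])
log_likelihood = np.sum(np.log(per_component.sum(axis=1)))
print(log_likelihood)
</syntaxhighlight>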
    
  −
      
:''Expectation step (E step)'': Define <math>Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)})</math> as the [[expected value]] of the log [[likelihood function]] of <math>\boldsymbol\theta</math>, with respect to the current [[conditional probability distribution|conditional distribution]] of <math>\mathbf{Z}</math> given <math>\mathbf{X}</math> and the current estimates of the parameters <math>\boldsymbol\theta^{(t)}</math>:
 
+
and the complete-data likelihood function is
   −
+
完全数据似然函数是
    
::<math>Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) = \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}}\left[ \log L (\boldsymbol\theta; \mathbf{X},\mathbf{Z})  \right] \,</math>
 
+
<math>L(\theta;\mathbf{x},\mathbf{z}) = p(\mathbf{x},\mathbf{z} \mid \theta) = \prod_{i=1}^n  \prod_{j=1}^2  \ [f(\mathbf{x}_i;\boldsymbol{\mu}_j,\Sigma_j) \tau_j] ^{\mathbb{I}(z_i=j)},</math>
   −
+
      第349行: 第301行:  
:''Maximization step (M step)'': Find the parameters that maximize this quantity:
 
+
or
   −
+
    
::<math>\boldsymbol\theta^{(t+1)} = \underset{\boldsymbol\theta}{\operatorname{arg\,max}} \ Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) \, </math>
 
         +
<math>L(\theta;\mathbf{x},\mathbf{z}) = \exp \left\{ \sum_{i=1}^n \sum_{j=1}^2 \mathbb{I}(z_i=j) \big[ \log \tau_j -\tfrac{1}{2} \log |\Sigma_j| -\tfrac{1}{2}(\mathbf{x}_i-\boldsymbol{\mu}_j)^\top\Sigma_j^{-1} (\mathbf{x}_i-\boldsymbol{\mu}_j) -\tfrac{d}{2} \log(2\pi) \big] \right\},</math>
   −
The typical models to which EM is applied use <math>\mathbf{Z}</math> as a latent variable indicating membership in one of a set of groups:
+
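A hedged sketch of the complete-data log-likelihood above for hypothetical known labels <math>z</math>: the indicator selects exactly one component term per observation (SciPy assumed available; values are illustrative):

<syntaxhighlight lang="python">
# Complete-data log-likelihood: sum_i [ log tau_{z_i} + log f(x_i; mu_{z_i}, Sigma_{z_i}) ].
import numpy as np
from scipy.stats import multivariate_normal

tau   = np.array([0.6, 0.4])
mu    = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
Sigma = [np.eye(2), 0.5 * np.eye(2)]

x = np.array([[0.1, -0.2], [2.9, 3.1], [0.4, 0.0]])
z = np.array([0, 1, 0])                          # hypothetical latent labels

complete_log_lik = sum(np.log(tau[zi]) + multivariate_normal.logpdf(xi, mean=mu[zi], cov=Sigma[zi])
                       for xi, zi in zip(x, z))
print(complete_log_lik)
</syntaxhighlight>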
    
  −
      
#The observed data points <math>\mathbf{X}</math> may be [[discrete random variable|discrete]] (taking values in a finite or countably infinite set) or [[continuous random variable|continuous]] (taking values in an uncountably infinite set). Associated with each data point may be a vector of observations.
 
+
where <math>\mathbb{I}</math> is an indicator function and <math>f</math> is the probability density function of a multivariate normal.
   −
+
其中 <math>\mathbb{I}</math> 是指示函数，<math>f</math> 是多元正态分布的概率密度函数。
    
#The [[missing values]] (aka [[latent variables]]) <math>\mathbf{Z}</math> are [[discrete random variable|discrete]], drawn from a fixed number of values, and with one latent variable per observed unit.  
 
  −
  −
      
#The parameters are continuous, and are of two kinds: Parameters that are associated with all data points, and those associated with a specific value of a latent variable (i.e., associated with all data points which corresponding latent variable has that value).
 
+
In the last equality, for each <math>i</math>, one indicator <math>\mathbb{I}(z_i=j)</math> is equal to zero, and one indicator is equal to one. The inner sum thus reduces to one term.
   −
+
在最后一个等式中，对每个 <math>i</math>，一个指示符 <math>\mathbb{I}(z_i=j)</math> 等于 0，另一个等于 1，因此内层求和化简为单独一项。
    
However, it is possible to apply EM to other sorts of models.
 
  −
            
The motive is as follows.  If the value of the parameters <math>\boldsymbol\theta</math> is known, usually the value of the latent variables <math>\mathbf{Z}</math> can be found by maximizing the log-likelihood over all possible values of <math>\mathbf{Z}</math>, either simply by iterating over <math>\mathbf{Z}</math> or through an algorithm such as the [[Baum–Welch algorithm]] for [[hidden Markov model]]s.  Conversely, if we know the value of the latent variables <math>\mathbf{Z}</math>, we can find an estimate of the parameters <math>\boldsymbol\theta</math> fairly easily, typically by simply grouping the observed data points according to the value of the associated latent variable and averaging the values, or some function of the values, of the points in each group.  This suggests an iterative algorithm, in the case where both <math>\boldsymbol\theta</math> and <math>\mathbf{Z}</math> are unknown:
 
      
#First, initialize the parameters <math>\boldsymbol\theta</math> to some random values.
 
+
Given our current estimate of the parameters θ<sup>(t)</sup>, the conditional distribution of the Z<sub>i</sub> is determined by Bayes theorem to be the proportional height of the normal density weighted by τ:
   −
+
给定参数的当前估计 θ<sup>(t)</sup>，由贝叶斯定理可知，Z<sub>i</sub> 的条件分布等于由 τ 加权的正态密度的相对高度：
    
#Compute the probability of each possible value of <math>\mathbf{Z}</math> , given <math>\boldsymbol\theta</math>.
 
+
<math>T_{j,i}^{(t)} := \operatorname{P}(Z_i=j \mid X_i=\mathbf{x}_i ;\theta^{(t)}) = \frac{\tau_j^{(t)} \ f(\mathbf{x}_i;\boldsymbol{\mu}_j^{(t)},\Sigma_j^{(t)})}{\tau_1^{(t)} \ f(\mathbf{x}_i;\boldsymbol{\mu}_1^{(t)},\Sigma_1^{(t)}) + \tau_2^{(t)} \ f(\mathbf{x}_i;\boldsymbol{\mu}_2^{(t)},\Sigma_2^{(t)})}.</math>
   −
Compute the probability of each possible value of <math>\mathbf{Z}</math>, given <math>\boldsymbol\theta</math>.
+
    
#Then, use the just-computed values of <math>\mathbf{Z}</math> to compute a better estimate for the parameters <math>\boldsymbol\theta</math>.
 
#Then, use the just-computed values of <math>\mathbf{Z}</math> to compute a better estimate for the parameters <math>\boldsymbol\theta</math>.
  −
Then, use the just-computed values of <math>\mathbf{Z}</math> to compute a better estimate for the parameters <math>\boldsymbol\theta</math>.
  −
  −
Then, use the just-computed values of <math>\mathbf{Z}</math> to compute a better estimate for the parameters <math>\boldsymbol\theta</math>.
      
#Iterate steps 2 and 3 until convergence.
 
#Iterate steps 2 and 3 until convergence.
   −
Iterate steps 2 and 3 until convergence.
+
These are called the "membership probabilities", which are normally considered the output of the E step (although this is not the Q function of below).
   −
迭代步骤2和3直到收敛。
+
These are called the "membership probabilities", which are normally considered the output of the E step (although this is not the Q function below).
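A minimal sketch of this E step for the two-component case, assuming NumPy and SciPy are available; the names <code>X</code>, <code>tau</code>, <code>mu</code>, <code>Sigma</code> are illustrative stand-ins for the current estimate θ<sup>(t)</sup>, and <code>T[j, i]</code> plays the role of T<sub>j,i</sub><sup>(t)</sup> in the formula above.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import multivariate_normal

def e_step(X, tau, mu, Sigma):
    """Membership probabilities T[j, i] = P(Z_i = j | x_i; theta^(t)) for j = 1, 2."""
    # Numerator: tau_j * f(x_i; mu_j, Sigma_j) for each component j
    weighted = np.stack([
        tau[j] * multivariate_normal.pdf(X, mean=mu[j], cov=Sigma[j])
        for j in range(2)
    ])
    # Denominator: sum over the two components, applied column-wise
    return weighted / weighted.sum(axis=0, keepdims=True)
</syntaxhighlight>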
    
The algorithm as just described monotonically approaches a local minimum of the cost function.
 
The algorithm as just described monotonically approaches a local minimum of the cost function.
   −
The algorithm as just described monotonically approaches a local minimum of the cost function.
     −
The algorithm just described monotonically approaches a local minimum of the cost function.
      +
This E step corresponds with setting up this function for Q:
    +
This E step corresponds to setting up this function for Q:
    
== Properties ==
 
== Properties ==
 +
 +
<math>\begin{align}Q(\theta\mid\theta^{(t)})
 +
 +
<math>\begin{align}Q(\theta\mid\theta^{(t)})
    
Speaking of an expectation (E) step is a bit of a [[misnomer]]. What are calculated in the first step are the fixed, data-dependent parameters of the function ''Q''. Once the parameters of ''Q'' are known, it is fully determined and is maximized in the second (M) step of an EM algorithm.
 
Speaking of an expectation (E) step is a bit of a [[misnomer]]. What are calculated in the first step are the fixed, data-dependent parameters of the function ''Q''. Once the parameters of ''Q'' are known, it is fully determined and is maximized in the second (M) step of an EM algorithm.
   −
Speaking of an expectation (E) step is a bit of a misnomer. What are calculated in the first step are the fixed, data-dependent parameters of the function Q. Once the parameters of Q are known, it is fully determined and is maximized in the second (M) step of an EM algorithm.
+
&= \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\mathbf{\theta}^{(t)}} [\log L(\theta;\mathbf{x},\mathbf{Z}) ] \\
 +
 
 +
&= \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\mathbf{\theta}^{(t)}} [\log L(\theta;\mathbf{x},\mathbf{Z}) ] \\
 +
 
   −
说到期望(e)步骤有点用词不当。在第一步计算的是函数 q 固定的、与数据相关的参数。一旦 q 的参数已知,它是完全确定的,并在第二(m)步的 EM 算法的最大化。
      +
&= \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\mathbf{\theta}^{(t)}} [\log \prod_{i=1}^{n}L(\theta;\mathbf{x}_i,Z_i) ] \\
    +
&= \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\mathbf{\theta}^{(t)}} [\log \prod_{i=1}^{n}L(\theta;\mathbf{x}_i,Z_i) ] \\
    
Although an EM iteration does increase the observed data (i.e., marginal) likelihood function, no guarantee exists that the sequence converges to a [[maximum likelihood estimator]]. For [[bimodal distribution|multimodal distributions]], this means that an EM algorithm may converge to a [[local maximum]] of the observed data likelihood function, depending on starting values. A variety of heuristic or [[metaheuristic]] approaches exist to escape a local maximum, such as random-restart [[hill climbing]] (starting with several different random initial estimates ''θ''<sup>(''t'')</sup>), or applying [[simulated annealing]] methods.
 
Although an EM iteration does increase the observed data (i.e., marginal) likelihood function, no guarantee exists that the sequence converges to a [[maximum likelihood estimator]]. For [[bimodal distribution|multimodal distributions]], this means that an EM algorithm may converge to a [[local maximum]] of the observed data likelihood function, depending on starting values. A variety of heuristic or [[metaheuristic]] approaches exist to escape a local maximum, such as random-restart [[hill climbing]] (starting with several different random initial estimates ''θ''<sup>(''t'')</sup>), or applying [[simulated annealing]] methods.
   −
Although an EM iteration does increase the observed data (i.e., marginal) likelihood function, no guarantee exists that the sequence converges to a maximum likelihood estimator. For multimodal distributions, this means that an EM algorithm may converge to a local maximum of the observed data likelihood function, depending on starting values. A variety of heuristic or metaheuristic approaches exist to escape a local maximum, such as random-restart hill climbing (starting with several different random initial estimates θ<sup>(t)</sup>), or applying simulated annealing methods.
+
&= \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\mathbf{\theta}^{(t)}} [\sum_{i=1}^n \log L(\theta;\mathbf{x}_i,Z_i) ] \\
 +
 
 +
&= \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\mathbf{\theta}^{(t)}} [\sum_{i=1}^n \log L(\theta;\mathbf{x}_i,Z_i) ] \\
 +
 
   −
虽然 EM 迭代确实增加了观测数据(即边际)的似然函数,但不能保证序列收敛于最大似然估计。对于多模态分布,这意味着 EM 算法可能收敛到观测数据似然函数的局部极大值,这取决于起始值。各种启发式或超启发式方法存在逃避局部最大值,如随机重启爬山(从几个不同的随机初始估计 sup (t) / sup 开始) ,或应用模拟退火方法。
      +
&= \sum_{i=1}^n\operatorname{E}_{Z_i\mid\mathbf{X};\mathbf{\theta}^{(t)}} [\log L(\theta;\mathbf{x}_i,Z_i) ] \\
    +
&= \sum_{i=1}^n\operatorname{E}_{Z_i\mid\mathbf{X};\mathbf{\theta}^{(t)}} [\log L(\theta;\mathbf{x}_i,Z_i) ] \\
    
EM is especially useful when the likelihood is an [[exponential family]]: the E step becomes the sum of expectations of [[sufficient statistic]]s, and the M step involves maximizing a linear function. In such a case, it is usually possible to derive [[closed-form expression]] updates for each step, using the Sundberg formula (published by Rolf Sundberg using unpublished results of [[Per Martin-Löf]] and [[Anders Martin-Löf]]).<ref name="Sundberg1971"/><ref name="Sundberg1976"/><ref name="Martin-Löf1963"/><ref name="Martin-Löf1966"/><ref name="Martin-Löf1970"/><ref name="Martin-Löf1974a"/><ref name="Martin-Löf1974b"/>
 
EM is especially useful when the likelihood is an [[exponential family]]: the E step becomes the sum of expectations of [[sufficient statistic]]s, and the M step involves maximizing a linear function. In such a case, it is usually possible to derive [[closed-form expression]] updates for each step, using the Sundberg formula (published by Rolf Sundberg using unpublished results of [[Per Martin-Löf]] and [[Anders Martin-Löf]]).<ref name="Sundberg1971"/><ref name="Sundberg1976"/><ref name="Martin-Löf1963"/><ref name="Martin-Löf1966"/><ref name="Martin-Löf1970"/><ref name="Martin-Löf1974a"/><ref name="Martin-Löf1974b"/>
   −
EM is especially useful when the likelihood is an exponential family: the E step becomes the sum of expectations of sufficient statistics, and the M step involves maximizing a linear function. In such a case, it is usually possible to derive closed-form expression updates for each step, using the Sundberg formula (published by Rolf Sundberg using unpublished results of Per Martin-Löf and Anders Martin-Löf).
+
&= \sum_{i=1}^n \sum_{j=1}^2 P(Z_i =j \mid X_i = \mathbf{x}_i; \theta^{(t)}) \log L(\theta_j;\mathbf{x}_i,j) \\
 +
 
 +
&= \sum_{i=1}^n \sum_{j=1}^2 P(Z_i =j \mid X_i = \mathbf{x}_i; \theta^{(t)}) \log L(\theta_j;\mathbf{x}_i,j) \\
 +
 
   −
当可能性是指数族时,EM 特别有用: e 步成为充分统计量的期望和,m 步涉及最大化线性函数。在这种情况下,通常可以使用 Sundberg 公式(由 Rolf Sundberg 发表,使用 Per martin-l f 和 Anders martin-l f 的未发表结果)推导出每个步骤的解析解更新。
      +
&= \sum_{i=1}^n \sum_{j=1}^2 T_{j,i}^{(t)} \big[ \log \tau_j  -\tfrac{1}{2} \log |\Sigma_j| -\tfrac{1}{2}(\mathbf{x}_i-\boldsymbol{\mu}_j)^\top\Sigma_j^{-1} (\mathbf{x}_i-\boldsymbol{\mu}_j) -\tfrac{d}{2} \log(2\pi) \big].
    +
&= \sum_{i=1}^n \sum_{j=1}^2 T_{j,i}^{(t)} \big[ \log \tau_j  -\tfrac{1}{2} \log |\Sigma_j| -\tfrac{1}{2}(\mathbf{x}_i-\boldsymbol{\mu}_j)^\top\Sigma_j^{-1} (\mathbf{x}_i-\boldsymbol{\mu}_j) -\tfrac{d}{2} \log(2\pi) \big].
    
The EM method was modified to compute [[maximum a posteriori]] (MAP) estimates for [[Bayesian inference]] in the original paper by Dempster, Laird, and Rubin.
 
The EM method was modified to compute [[maximum a posteriori]] (MAP) estimates for [[Bayesian inference]] in the original paper by Dempster, Laird, and Rubin.
   −
The EM method was modified to compute maximum a posteriori (MAP) estimates for Bayesian inference in the original paper by Dempster, Laird, and Rubin.
+
\end{align}</math>
 +
 
 +
\end{align}</math>
 +
 
   −
在 Dempster、 Laird 和 Rubin 的原始论文中,EM 方法被修改为计算最大后验概率估计贝叶斯推断。
      +
The expectation of <math>\log L(\theta;\mathbf{x}_i,Z_i)</math> inside the sum is taken with respect to the probability density function <math>P(Z_i \mid X_i = \mathbf{x}_i; \theta^{(t)})</math>, which might be different for each  <math>\mathbf{x}_i</math> of the training set. Everything in the E step is known before the step is taken except <math>T_{j,i}</math>, which is computed according to the equation at the beginning of the E step section.
    +
在求和内部,对于概率密度函数的期望值取相对于训练集的数学公式 p (z _ i mid x _ i = mathbf { x } _ i; theta ^ {(t)}) </math > ,这对于每个 < math > bf { x } _ i </math > 的数学公式可能是不同的。在执行 e 步之前,e 步中的所有内容都是已知的,只有 < math > t _ { j,i } </math > 除外,它是根据 e 步部分开头的方程计算出来的。
    
Other methods exist to find maximum likelihood estimates, such as [[gradient descent]], [[conjugate gradient]], or variants of the [[Gauss–Newton algorithm]]. Unlike EM, such methods typically require the evaluation of first and/or second derivatives of the likelihood function.
 
Other methods exist to find maximum likelihood estimates, such as [[gradient descent]], [[conjugate gradient]], or variants of the [[Gauss–Newton algorithm]]. Unlike EM, such methods typically require the evaluation of first and/or second derivatives of the likelihood function.
   −
Other methods exist to find maximum likelihood estimates, such as gradient descent, conjugate gradient, or variants of the Gauss–Newton algorithm. Unlike EM, such methods typically require the evaluation of first and/or second derivatives of the likelihood function.
     −
还有一些其他的方法可以找到最大似然估计,比如梯度下降法,共轭梯度,或者 Gauss-Newton 算法的变体。与 EM 不同,这类方法通常需要对似然函数的一阶和 / 或二阶导数求值。
      +
This full conditional expectation does not need to be calculated in one step, because τ and μ/Σ appear in separate linear terms and can thus be maximized independently.
    +
这个完整的条件期望不需要一步计算,因为 τ 和 μ/σ 是分开的线性项,因此可以独立地最大化。
    
== Proof of correctness ==
 
== Proof of correctness ==
   −
Expectation-maximization works to improve <math>Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)})</math> rather than directly improving <math>\log p(\mathbf{X}\mid\boldsymbol\theta)</math>.  Here is shown that improvements to the former imply improvements to the latter.<ref name="Little1987">{{cite book |last1=Little |first1= Roderick J.A. |author1-link= |last2= Rubin |first2= Donald B. |author2-link= Donald Rubin |title= Statistical Analysis with Missing Data | series = Wiley Series in Probability and Mathematical Statistics |year= 1987 |publisher= John Wiley & Sons |location= New York |isbn= 978-0-471-80254-9 |pages= 134–136}}</ref>
+
Expectation-maximization works to improve <math>Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)})</math> rather than directly improving <math>\log p(\mathbf{X}\mid\boldsymbol\theta)</math>.  Here is shown that improvements to the former imply improvements to the latter.<ref name="Little1987">{{cite book |last1=Little |first1= Roderick J.A. |author1-link= |last2= Rubin |first2= Donald B. |author2-link= Donald Rubin |title= Statistical Analysis with Missing Data |url=https://archive.org/details/statisticalanaly00litt |url-access=limited | series = Wiley Series in Probability and Mathematical Statistics |year= 1987 |publisher= John Wiley & Sons |location= New York |isbn= 978-0-471-80254-9 |pages= [https://archive.org/details/statisticalanaly00litt/page/n145 134]–136}}</ref>
   −
Expectation-maximization works to improve <math>Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)})</math> rather than directly improving <math>\log p(\mathbf{X}\mid\boldsymbol\theta)</math>.  Here is shown that improvements to the former imply improvements to the latter.
     −
期望-最大化可以改进数学 q ( boldsymbol  theta  mid  theta ^ (t)) / math,而不是直接改进 math  log p ( mathbf  mid  boldsymbol  theta) / math。这里显示对前者的改进意味着对后者的改进。
      +
Q(θ&nbsp;|&nbsp;θ<sup>(t)</sup>) being quadratic in form means that determining the maximizing values of θ is relatively straightforward. Also, τ, (μ<sub>1</sub>,Σ<sub>1</sub>) and (μ<sub>2</sub>,Σ<sub>2</sub>) may all be maximized independently since they all appear in separate linear terms.
    +
Q (θ | θ < sup > (t) </sup >)为二次型,因此确定 θ 的最大值相对比较简单。τ,(μ < sub > 1 </sub > ,σ < sub > 1 </sub >)和(μ < sub > 2 </sub > ,σ < sub > 2 </sub >)都可以独立地最大化,因为它们都是以独立的线性项出现的。
    
For any <math>\mathbf{Z}</math> with non-zero probability <math>p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta)</math>, we can write
 
For any <math>\mathbf{Z}</math> with non-zero probability <math>p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta)</math>, we can write
   −
For any <math>\mathbf{Z}</math> with non-zero probability <math>p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta)</math>, we can write
+
: <math>
   −
For any <math>\mathbf{Z}</math> with non-zero probability <math>p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta)</math>, we can write
+
To begin, consider τ, which has the constraint τ<sub>1</sub> + τ<sub>2</sub>=1:
   −
::<math>
+
To begin, consider τ, which has the constraint τ<sub>1</sub> + τ<sub>2</sub> = 1:
   −
<math>
+
\log p(\mathbf{X}\mid\boldsymbol\theta) = \log p(\mathbf{X},\mathbf{Z}\mid\boldsymbol\theta) - \log p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta).
   −
数学
+
<math>\begin{align}\boldsymbol{\tau}^{(t+1)}
   −
\log p(\mathbf{X}\mid\boldsymbol\theta) = \log p(\mathbf{X},\mathbf{Z}\mid\boldsymbol\theta) - \log p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta) \,.
+
<math>\begin{align}\boldsymbol{\tau}^{(t+1)}
 
  −
\log p(\mathbf{X}\mid\boldsymbol\theta) = \log p(\mathbf{X},\mathbf{Z}\mid\boldsymbol\theta) - \log p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta) \,.
  −
 
  −
1. log p ( mathbf { x } mid  boldsymbol  theta) log p ( mathbf { x } , mathbf { z } mid  boldsymbol  theta)- log p ( mathbf { z } mid  mathbf { x } , boldsymbol  theta) ,.
      
</math>
 
</math>
   −
</math>
+
&= \underset{\boldsymbol{\tau}} {\operatorname{arg\,max}}\  Q(\theta \mid \theta^{(t)} ) \\
   −
数学
+
&= \underset{\boldsymbol{\tau}} {\operatorname{arg\,max}}\  Q(\theta \mid \theta^{(t)} ) \\
    
We take the expectation over possible values of the unknown data <math>\mathbf{Z}</math> under the current parameter estimate <math>\theta^{(t)}</math> by multiplying both sides by <math>p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)})</math> and summing (or integrating) over <math>\mathbf{Z}</math>.  The left-hand side is the expectation of a constant, so we get:
 
We take the expectation over possible values of the unknown data <math>\mathbf{Z}</math> under the current parameter estimate <math>\theta^{(t)}</math> by multiplying both sides by <math>p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)})</math> and summing (or integrating) over <math>\mathbf{Z}</math>.  The left-hand side is the expectation of a constant, so we get:
   −
We take the expectation over possible values of the unknown data <math>\mathbf{Z}</math> under the current parameter estimate <math>\theta^{(t)}</math> by multiplying both sides by <math>p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)})</math> and summing (or integrating) over <math>\mathbf{Z}</math>. The left-hand side is the expectation of a constant, so we get:
+
&= \underset{\boldsymbol{\tau}} {\operatorname{arg\,max}} \ \left\{ \left[  \sum_{i=1}^n T_{1,i}^{(t)} \right] \log \tau_1 + \left[  \sum_{i=1}^n T_{2,i}^{(t)} \right] \log \tau_2  \right\}.
   −
We take the expectation over possible values of the unknown data <math>\mathbf{Z}</math> under the current parameter estimate <math>\theta^{(t)}</math> by multiplying both sides by <math>p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)})</math> and summing (or integrating) over <math>\mathbf{Z}</math>. The left-hand side is the expectation of a constant, so we get:
+
&= \underset{\boldsymbol{\tau}} {\operatorname{arg\,max}} \ \left\{ \left[  \sum_{i=1}^n T_{1,i}^{(t)} \right] \log \tau_1 + \left[  \sum_{i=1}^n T_{2,i}^{(t)} \right] \log \tau_2  \right\}.
   −
::<math>
+
: <math>
   −
<math>
+
\end{align}</math>
   −
数学
+
\end{align}</math>
    
\begin{align}
 
\begin{align}
   −
\begin{align}
+
This has the same form as the MLE for the binomial distribution, so
   −
\begin{align}
+
This has the same form as the MLE for the binomial distribution, so
 
  −
\log p(\mathbf{X}\mid\boldsymbol\theta) &
      
\log p(\mathbf{X}\mid\boldsymbol\theta) &
 
\log p(\mathbf{X}\mid\boldsymbol\theta) &
   −
\log p(\mathbf{X}\mid\boldsymbol\theta) &
+
<math>\tau^{(t+1)}_j = \frac{\sum_{i=1}^n T_{j,i}^{(t)}}{\sum_{i=1}^n (T_{1,i}^{(t)} + T_{2,i}^{(t)} ) } = \frac{1}{n} \sum_{i=1}^n T_{j,i}^{(t)}.</math>
   −
= \sum_{\mathbf{Z}} p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}) \log p(\mathbf{X},\mathbf{Z}\mid\boldsymbol\theta)
+
<math>\tau^{(t+1)}_j = \frac{\sum_{i=1}^n T_{j,i}^{(t)}}{\sum_{i=1}^n (T_{1,i}^{(t)} + T_{2,i}^{(t)} ) } = \frac{1}{n} \sum_{i=1}^n T_{j,i}^{(t)}.</math>
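In code this update can be read off directly from the membership probabilities; a minimal sketch, assuming <code>T</code> is the (2, n) NumPy array of responsibilities produced by the hypothetical <code>e_step</code> above.

<syntaxhighlight lang="python">
import numpy as np

def update_tau(T):
    """tau_j^(t+1) = (1/n) * sum_i T[j, i]; each column of T sums to one."""
    n = T.shape[1]
    return T.sum(axis=1) / n
</syntaxhighlight>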
    
= \sum_{\mathbf{Z}} p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}) \log p(\mathbf{X},\mathbf{Z}\mid\boldsymbol\theta)
 
= \sum_{\mathbf{Z}} p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}) \log p(\mathbf{X},\mathbf{Z}\mid\boldsymbol\theta)
  −
[数学][数学][数学][数学][数学][数学][数学][数学][数学][数学]][数学][数学][数学][数学][数学][数学]][数学][数学]][数学][数学]][数学][数学][数学][数学][数学]][数学][数学][数学][数学][数学][数学][
      
- \sum_{\mathbf{Z}} p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}) \log p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta) \\
 
- \sum_{\mathbf{Z}} p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}) \log p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta) \\
   −
- \sum_{\mathbf{Z}} p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}) \log p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta) \\
+
For the next estimates of (μ<sub>1</sub>,Σ<sub>1</sub>):
   −
- \sum_{\mathbf{Z}} p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}) \log p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta) \\
+
For the next estimates of (μ<sub>1</sub>, Σ<sub>1</sub>):
   −
& = Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) + H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) \,,
+
& = Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) + H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}),
   −
& = Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) + H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) \,,
+
<math>\begin{align}(\boldsymbol{\mu}_1^{(t+1)},\Sigma_1^{(t+1)})
   −
& = Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) + H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) \,,
+
<math>\begin{align}(\boldsymbol{\mu}_1^{(t+1)},\Sigma_1^{(t+1)})
    
\end{align}
 
\end{align}
   −
\end{align}
+
&= \underset{\boldsymbol{\mu}_1,\Sigma_1} \operatorname{arg\,max} Q(\theta \mid \theta^{(t)} ) \\
   −
\end{align}
+
&= \underset{\boldsymbol{\mu}_1,\Sigma_1} \operatorname{arg\,max} Q(\theta \mid \theta^{(t)} ) \\
    
</math>
 
</math>
   −
</math>
+
&= \underset{\boldsymbol{\mu}_1,\Sigma_1} \operatorname{arg\,max} \sum_{i=1}^n T_{1,i}^{(t)} \left\{ -\tfrac{1}{2} \log |\Sigma_1| -\tfrac{1}{2}(\mathbf{x}_i-\boldsymbol{\mu}_1)^\top\Sigma_1^{-1} (\mathbf{x}_i-\boldsymbol{\mu}_1) \right\}
   −
数学
+
&= \underset{\boldsymbol{\mu}_1,\Sigma_1} \operatorname{arg\,max} \sum_{i=1}^n T_{1,i}^{(t)} \left\{ -\tfrac{1}{2} \log |\Sigma_1| -\tfrac{1}{2}(\mathbf{x}_i-\boldsymbol{\mu}_1)^\top\Sigma_1^{-1} (\mathbf{x}_i-\boldsymbol{\mu}_1) \right\}
    
where <math>H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)})</math> is defined by the negated sum it is replacing.
 
where <math>H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)})</math> is defined by the negated sum it is replacing.
   −
where <math>H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)})</math> is defined by the negated sum it is replacing.
+
\end{align}.</math>
   −
where <math>H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)})</math> is defined by the negated sum it is replacing.
+
\end{align}.</math>
    
This last equation holds for every value of <math>\boldsymbol\theta</math> including <math>\boldsymbol\theta = \boldsymbol\theta^{(t)}</math>,
 
This last equation holds for every value of <math>\boldsymbol\theta</math> including <math>\boldsymbol\theta = \boldsymbol\theta^{(t)}</math>,
   −
This last equation holds for every value of <math>\boldsymbol\theta</math> including <math>\boldsymbol\theta = \boldsymbol\theta^{(t)}</math>,
+
This has the same form as a weighted MLE for a normal distribution, so
   −
This last equation holds for every value of <math>\boldsymbol\theta</math> including <math>\boldsymbol\theta = \boldsymbol\theta^{(t)}</math>,
+
This has the same form as a weighted MLE for a normal distribution, so
 
  −
::<math>
     −
<math>
+
: <math>
   −
数学
+
<math>\boldsymbol{\mu}_1^{(t+1)} = \frac{\sum_{i=1}^n T_{1,i}^{(t)} \mathbf{x}_i}{\sum_{i=1}^n T_{1,i}^{(t)}} </math> and <math>\Sigma_1^{(t+1)} = \frac{\sum_{i=1}^n T_{1,i}^{(t)} (\mathbf{x}_i - \boldsymbol{\mu}_1^{(t+1)}) (\mathbf{x}_i - \boldsymbol{\mu}_1^{(t+1)})^\top }{\sum_{i=1}^n T_{1,i}^{(t)}} </math>
   −
\log p(\mathbf{X}\mid\boldsymbol\theta^{(t)})
+
<math>\boldsymbol{\mu}_1^{(t+1)} = \frac{\sum_{i=1}^n T_{1,i}^{(t)} \mathbf{x}_i}{\sum_{i=1}^n T_{1,i}^{(t)}} </math> and <math>\Sigma_1^{(t+1)} = \frac{\sum_{i=1}^n T_{1,i}^{(t)} (\mathbf{x}_i - \boldsymbol{\mu}_1^{(t+1)}) (\mathbf{x}_i - \boldsymbol{\mu}_1^{(t+1)})^\top }{\sum_{i=1}^n T_{1,i}^{(t)}} </math>
    
\log p(\mathbf{X}\mid\boldsymbol\theta^{(t)})
 
\log p(\mathbf{X}\mid\boldsymbol\theta^{(t)})
   −
\log p(\mathbf{X}\mid\boldsymbol\theta^{(t)})
+
and, by symmetry,
   −
= Q(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}) + H(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}) \,,
+
and, by symmetry,
   −
= Q(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}) + H(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}) \,,
+
= Q(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}) + H(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}),
   −
= Q(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}) + H(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}) \,,
+
<math>\boldsymbol{\mu}_2^{(t+1)} = \frac{\sum_{i=1}^n T_{2,i}^{(t)} \mathbf{x}_i}{\sum_{i=1}^n T_{2,i}^{(t)}} </math> and <math>\Sigma_2^{(t+1)} = \frac{\sum_{i=1}^n T_{2,i}^{(t)} (\mathbf{x}_i - \boldsymbol{\mu}_2^{(t+1)}) (\mathbf{x}_i - \boldsymbol{\mu}_2^{(t+1)})^\top }{\sum_{i=1}^n T_{2,i}^{(t)}}.</math>
   −
</math>
+
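A sketch of these weighted-MLE updates for a generic component j, assuming <code>X</code> has shape (n, d) and <code>T</code> is the (2, n) responsibility array from the E step; the function name and array layout are illustrative assumptions, not from the article.

<syntaxhighlight lang="python">
import numpy as np

def update_component(X, T, j):
    """Weighted mean and covariance for component j from responsibilities T[j]."""
    w = T[j]                                          # weights T_{j,i}^{(t)}
    mu = (w[:, None] * X).sum(axis=0) / w.sum()       # mu_j^(t+1)
    diff = X - mu
    # Weighted sum of outer products diff_i diff_i^T, normalised by sum of weights
    Sigma = (w[:, None, None] * (diff[:, :, None] * diff[:, None, :])).sum(axis=0) / w.sum()
    return mu, Sigma                                  # Sigma_j^(t+1)
</syntaxhighlight>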
    
</math>
 
</math>
  −
数学
      
and subtracting this last equation from the previous equation gives
 
and subtracting this last equation from the previous equation gives
   −
and subtracting this last equation from the previous equation gives
+
: <math>
 
  −
and subtracting this last equation from the previous equation gives
  −
 
  −
::<math>
  −
 
  −
<math>
  −
 
  −
数学
      
\log p(\mathbf{X}\mid\boldsymbol\theta) - \log p(\mathbf{X}\mid\boldsymbol\theta^{(t)})
 
\log p(\mathbf{X}\mid\boldsymbol\theta) - \log p(\mathbf{X}\mid\boldsymbol\theta^{(t)})
   −
\log p(\mathbf{X}\mid\boldsymbol\theta) - \log p(\mathbf{X}\mid\boldsymbol\theta^{(t)})
+
Conclude the iterative process if <math> E_{Z\mid\theta^{(t)},\mathbf{x}}[\log L(\theta^{(t)};\mathbf{x},\mathbf{Z})] \leq E_{Z\mid\theta^{(t-1)},\mathbf{x}}[\log L(\theta^{(t-1)};\mathbf{x},\mathbf{Z})] + \varepsilon</math> for <math> \varepsilon </math> below some preset threshold.
 
  −
Log p ( mathbf { x } mid  boldsymbol  theta)- log p ( mathbf { x } mid  boldsymbol  theta (t)})
     −
= Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) - Q(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)})
+
Conclude the iterative process if <math> E_{Z\mid\theta^{(t)},\mathbf{x}}[\log L(\theta^{(t)};\mathbf{x},\mathbf{Z})] \leq E_{Z\mid\theta^{(t-1)},\mathbf{x}}[\log L(\theta^{(t-1)};\mathbf{x},\mathbf{Z})] + \varepsilon</math> for <math> \varepsilon </math> below some preset threshold.
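Putting the pieces together under this stopping rule, a rough driver loop might look as follows; it reuses the hypothetical <code>e_step</code>, <code>update_tau</code> and <code>update_component</code> helpers sketched earlier, and monitors the expected complete-data log-likelihood from the formula above.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import multivariate_normal

def expected_complete_loglik(X, T, tau, mu, Sigma):
    """E_{Z|X,theta^(t)}[log L(theta^(t); X, Z)] for the two-component mixture."""
    parts = [T[j] * (np.log(tau[j]) + multivariate_normal.logpdf(X, mean=mu[j], cov=Sigma[j]))
             for j in range(2)]
    return float(np.sum(parts))

def em_gmm(X, tau, mu, Sigma, eps=1e-6, max_iter=500):
    prev = -np.inf
    for _ in range(max_iter):
        T = e_step(X, tau, mu, Sigma)                          # E step
        cur = expected_complete_loglik(X, T, tau, mu, Sigma)   # quantity in the stopping rule
        if cur - prev <= eps:                                  # stop once the gain falls below eps
            break
        prev = cur
        tau = update_tau(T)                                    # M step: mixing weights
        updates = [update_component(X, T, j) for j in range(2)]
        mu = [u[0] for u in updates]                           # M step: means
        Sigma = [u[1] for u in updates]                        # M step: covariances
    return tau, mu, Sigma
</syntaxhighlight>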
    
= Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) - Q(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)})
 
= Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) - Q(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)})
   −
= Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) - Q(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)})
+
  + H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) - H(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}),
 
  −
  + H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) - H(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}) \,,
  −
 
  −
+ H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) - H(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}) \,,
  −
 
  −
+ H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) - H(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}) \,,
      
</math>
 
</math>
   −
</math>
+
However, [[Gibbs' inequality]] tells us that <math>H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) \ge H(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)})</math>, so we can conclude that
   −
数学
+
The algorithm illustrated above can be generalized for mixtures of more than two multivariate normal distributions.
   −
However, [[Jensen's inequality]] tells us that <math>H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) \ge H(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)})</math>, so we can conclude that
+
The algorithm illustrated above can be generalized to mixtures of more than two multivariate normal distributions.
 
  −
However, Jensen's inequality tells us that <math>H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) \ge H(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)})</math>, so we can conclude that
  −
 
  −
However, Jensen's inequality tells us that <math>H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) \ge H(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)})</math>, so we can conclude that
  −
 
  −
::<math>
  −
 
  −
<math>
  −
 
  −
数学
     −
\log p(\mathbf{X}\mid\boldsymbol\theta) - \log p(\mathbf{X}\mid\boldsymbol\theta^{(t)})
+
: <math>
    
\log p(\mathbf{X}\mid\boldsymbol\theta) - \log p(\mathbf{X}\mid\boldsymbol\theta^{(t)})
 
\log p(\mathbf{X}\mid\boldsymbol\theta) - \log p(\mathbf{X}\mid\boldsymbol\theta^{(t)})
   −
\log p(\mathbf{X}\mid\boldsymbol\theta) - \log p(\mathbf{X}\mid\boldsymbol\theta^{(t)})
+
\ge Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) - Q(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}).
 
  −
\ge Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) - Q(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}) \,.
  −
 
  −
\ge Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) - Q(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}) \,.
  −
 
  −
\ge Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) - Q(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}) \,.
  −
 
  −
</math>
      
</math>
 
</math>
   −
数学
+
The EM algorithm has been implemented in the case where an underlying linear regression model exists explaining the variation of some quantity, but where the values actually observed are censored or truncated versions of those represented in the model.
   −
In words, choosing <math>\boldsymbol\theta</math> to improve <math>Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)})</math> causes <math>\log p(\mathbf{X}\mid\boldsymbol\theta)</math> to improve at least as much.
+
The EM algorithm has been implemented in the case where an underlying linear regression model exists explaining the variation of some quantity, but where the values actually observed are censored or truncated versions of those represented in the model.
    
In words, choosing <math>\boldsymbol\theta</math> to improve <math>Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)})</math> causes <math>\log p(\mathbf{X}\mid\boldsymbol\theta)</math> to improve at least as much.
 
In words, choosing <math>\boldsymbol\theta</math> to improve <math>Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)})</math> causes <math>\log p(\mathbf{X}\mid\boldsymbol\theta)</math> to improve at least as much.
  −
In words, choosing <math>\boldsymbol\theta</math> to improve <math>Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)})</math> causes <math>\log p(\mathbf{X}\mid\boldsymbol\theta)</math> to improve at least as much.
        第673行: 第577行:  
== As a maximization–maximization procedure ==
 
== As a maximization–maximization procedure ==
   −
The EM algorithm can be viewed as two alternating maximization steps, that is, as an example of [[coordinate descent|coordinate ascent]].<ref name="neal1999">{{cite book|last1=Neal |first=Radford |last2=Hinton |first2=Geoffrey |author-link2=Geoffrey Hinton |title=A view of the EM algorithm that justifies incremental, sparse, and other variants |journal=Learning in Graphical Models |editor=[[Michael I. Jordan]] |pages= 355–368 |publisher= MIT Press |location=Cambridge, MA |year=1999 |isbn=978-0-262-60032-3 |url=ftp://ftp.cs.toronto.edu/pub/radford/emk.pdf |accessdate=2009-03-22}}</ref><ref name="hastie2001">{{cite book|last1=Hastie |first1= Trevor|author-link1=Trevor Hastie|last2=Tibshirani|first2=Robert|author-link2=Robert Tibshirani|last3=Friedman|first3=Jerome |year=2001 |title=The Elements of Statistical Learning |isbn=978-0-387-95284-0 |publisher=Springer |location=New York |chapter=8.5 The EM algorithm |pages=236–243}}</ref> Consider the function:
+
EM typically converges to a local optimum, not necessarily the global optimum, with no bound on the convergence rate in general. It is possible that it can be arbitrarily poor in high dimensions and there can be an exponential number of local optima. Hence, a need exists for alternative methods for guaranteed learning, especially in the high-dimensional setting. Alternatives to EM exist with better guarantees for consistency, which are termed moment-based approaches or the so-called spectral techniques. Moment-based approaches to learning the parameters of a probabilistic model are of increasing interest recently since they enjoy guarantees such as global convergence under certain conditions unlike EM which is often plagued by the issue of getting stuck in local optima. Algorithms with guarantees for learning can be derived for a number of important models such as mixture models, HMMs etc. For these spectral methods, no spurious local optima occur, and the true parameters can be consistently estimated under some regularity conditions.
   −
The EM algorithm can be viewed as two alternating maximization steps, that is, as an example of coordinate ascent. Consider the function:
+
EM 通常收敛到一个局部最优解,而不一定是全局最优解,收敛速度一般没有界限。这是可能的,它可以任意贫穷的高维,并可能有一个指数数目的局部最优化。因此,需要其他方法来保证学习,特别是在高维度的设置。有更好的一致性保证的 EM 替代方法存在,这被称为基于时刻的方法或所谓的光谱技术。基于矩的概率模型参数学习方法近年来越来越受到人们的关注,因为它们在一定条件下具有全局收敛性等保证。具有学习保证的算法可以推导出一些重要的模型,如混合模型、隐马尔可夫模型等。对于这些谱方法,没有出现伪局部最优,并且在一定的正则性条件下可以一致地估计真实参数。
   −
Em 算法可以看作是两个交替的最大化步骤,即作为协调上升的一个例子。考虑一下功能:
+
The EM algorithm can be viewed as two alternating maximization steps, that is, as an example of [[coordinate descent]].<ref name="neal1999">{{cite book|last1=Neal |first=Radford |last2=Hinton |first2=Geoffrey |author-link2=Geoffrey Hinton |title=A view of the EM algorithm that justifies incremental, sparse, and other variants |journal=Learning in Graphical Models |editor=[[Michael I. Jordan]] |pages= 355–368 |publisher= MIT Press |location=Cambridge, MA |year=1999 |isbn=978-0-262-60032-3 |url=ftp://ftp.cs.toronto.edu/pub/radford/emk.pdf |accessdate=2009-03-22}}</ref><ref name="hastie2001">{{cite book|last1=Hastie |first1= Trevor|author-link1=Trevor Hastie|last2=Tibshirani|first2=Robert|author-link2=Robert Tibshirani|last3=Friedman|first3=Jerome |year=2001 |title=The Elements of Statistical Learning |url=https://archive.org/details/elementsstatisti00thas_842 |url-access=limited |isbn=978-0-387-95284-0 |publisher=Springer |location=New York |chapter=8.5 The EM algorithm |pages=[https://archive.org/details/elementsstatisti00thas_842/page/n237 236]–243}}</ref> Consider the function:
    
:<math> F(q,\theta) := \operatorname{E}_q [ \log L (\theta ; x,Z) ] + H(q), </math>  
 
:<math> F(q,\theta) := \operatorname{E}_q [ \log L (\theta ; x,Z) ] + H(q), </math>  
  −
<math> F(q,\theta) := \operatorname{E}_q [ \log L (\theta ; x,Z) ] + H(q), </math>
  −
  −
<math> F(q,\theta) := \operatorname{E}_q [ \log L (\theta ; x,Z) ] + H(q), </math>
      
where ''q'' is an arbitrary probability distribution over the unobserved data ''z'' and ''H(q)'' is the [[Entropy (information theory)|entropy]] of the distribution ''q''. This function can be written as
 
where ''q'' is an arbitrary probability distribution over the unobserved data ''z'' and ''H(q)'' is the [[Entropy (information theory)|entropy]] of the distribution ''q''. This function can be written as
  −
where q is an arbitrary probability distribution over the unobserved data z and H(q) is the entropy of the distribution q. This function can be written as
  −
  −
其中 q 是未观测数据 z 上的任意概率分布,h (q)是分布 q 的熵。这个函数可以写成
      
:<math> F(q,\theta) = -D_{\mathrm{KL}}\big(q \parallel p_{Z\mid X}(\cdot\mid x;\theta ) \big) + \log L(\theta;x), </math>
 
:<math> F(q,\theta) = -D_{\mathrm{KL}}\big(q \parallel p_{Z\mid X}(\cdot\mid x;\theta ) \big) + \log L(\theta;x), </math>
  −
<math> F(q,\theta) = -D_{\mathrm{KL}}\big(q \parallel p_{Z\mid X}(\cdot\mid x;\theta ) \big) + \log L(\theta;x), </math>
  −
  −
<math> F(q,\theta) = -D_{\mathrm{KL}}\big(q \parallel p_{Z\mid X}(\cdot\mid x;\theta ) \big) + \log L(\theta;x), </math>
      
where  <math>p_{Z\mid X}(\cdot\mid x;\theta )</math> is the conditional distribution of the unobserved data given the observed data <math>x</math> and <math>D_{KL}</math> is the [[Kullback–Leibler divergence]].
 
where  <math>p_{Z\mid X}(\cdot\mid x;\theta )</math> is the conditional distribution of the unobserved data given the observed data <math>x</math> and <math>D_{KL}</math> is the [[Kullback–Leibler divergence]].
  −
where  <math>p_{Z\mid X}(\cdot\mid x;\theta )</math> is the conditional distribution of the unobserved data given the observed data <math>x</math> and <math>D_{KL}</math> is the Kullback–Leibler divergence.
  −
  −
where <math>p_{Z\mid X}(\cdot\mid x;\theta )</math> is the conditional distribution of the unobserved data given the observed data <math>x</math>, and <math>D_{KL}</math> is the Kullback–Leibler divergence.
            
Then the steps in the EM algorithm may be viewed as:
 
Then the steps in the EM algorithm may be viewed as:
  −
Then the steps in the EM algorithm may be viewed as:
  −
  −
然后 EM 算法中的步骤可以被看作:
      
:''Expectation step'': Choose <math>q</math> to maximize <math>F</math>:
 
:''Expectation step'': Choose <math>q</math> to maximize <math>F</math>:
  −
Expectation step: Choose <math>q</math> to maximize <math>F</math>:
  −
  −
Expectation step: Choose <math>q</math> to maximize <math>F</math>:
      
::<math> q^{(t)} = \operatorname{arg\,max}_q \ F(q,\theta^{(t)}) </math>
 
::<math> q^{(t)} = \operatorname{arg\,max}_q \ F(q,\theta^{(t)}) </math>
  −
<math> q^{(t)} = \operatorname{arg\,max}_q \ F(q,\theta^{(t)}) </math>
  −
  −
<math> q^{(t)} = \operatorname{arg\,max}_q \ F(q,\theta^{(t)}) </math>
      
:''Maximization step'': Choose <math>\theta</math> to maximize <math>F</math>:
 
:''Maximization step'': Choose <math>\theta</math> to maximize <math>F</math>:
  −
Maximization step: Choose <math>\theta</math> to maximize <math>F</math>:
  −
  −
Maximization step: Choose <math>\theta</math> to maximize <math>F</math>:
      
::<math> \theta^{(t+1)} = \operatorname{arg\,max}_\theta \ F(q^{(t)},\theta) </math>
 
::<math> \theta^{(t+1)} = \operatorname{arg\,max}_\theta \ F(q^{(t)},\theta) </math>
  −
<math> \theta^{(t+1)} = \operatorname{arg\,max}_\theta \ F(q^{(t)},\theta) </math>
  −
  −
<math> \theta^{(t+1)} = \operatorname{arg\,max}_\theta \ F(q^{(t)},\theta) </math>
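For a discrete latent variable, this F can be evaluated directly. The sketch below assumes <code>q</code> is a probability vector over the values of Z and <code>complete_loglik[k]</code> stores log L(θ; x, Z=k); both names are illustrative assumptions. The expectation step above then amounts to setting q to the posterior, i.e. a softmax of the complete-data log-likelihoods, which drives the KL term to zero.

<syntaxhighlight lang="python">
import numpy as np

def free_energy(q, complete_loglik):
    """F(q, theta) = E_q[log L(theta; x, Z)] + H(q)."""
    entropy = -np.sum(q * np.log(q))              # H(q), assuming q > 0 elementwise
    return np.sum(q * complete_loglik) + entropy

def e_step_q(complete_loglik):
    """arg max_q F(q, theta) is the posterior p(Z | x; theta), a softmax of complete_loglik."""
    w = np.exp(complete_loglik - complete_loglik.max())   # subtract max for numerical stability
    return w / w.sum()
</syntaxhighlight>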
        第740行: 第608行:     
EM is frequently used for [[data clustering]] in [[machine learning]] and [[computer vision]]. In [[natural language processing]], two prominent instances of the algorithm are the [[Baum–Welch algorithm]] for [[hidden Markov models]], and the [[inside-outside algorithm]] for unsupervised induction of [[probabilistic context-free grammar]]s.
 
EM is frequently used for [[data clustering]] in [[machine learning]] and [[computer vision]]. In [[natural language processing]], two prominent instances of the algorithm are the [[Baum–Welch algorithm]] for [[hidden Markov models]], and the [[inside-outside algorithm]] for unsupervised induction of [[probabilistic context-free grammar]]s.
  −
EM is frequently used for data clustering in machine learning and computer vision. In natural language processing, two prominent instances of the algorithm are the Baum–Welch algorithm for hidden Markov models, and the inside-outside algorithm for unsupervised induction of probabilistic context-free grammars.
  −
  −
Em 算法是机器学习和计算机视觉中常用的数据聚类算法。在自然语言处理中,该算法的两个突出实例是用于隐马尔可夫模型的 Baum-Welch 算法和用于概率上下文无关文法的无监督归纳的内外部算法。
        第749行: 第613行:  
EM is frequently used for parameter estimation of [[mixed model]]s,<ref>{{cite journal |doi=10.1080/01621459.1988.10478693 |title=Newton—Raphson and EM Algorithms for Linear Mixed-Effects Models for Repeated-Measures Data |journal=Journal of the American Statistical Association |volume=83 |issue=404 |pages=1014 |year=1988 |last1=Lindstrom |first1=Mary J |last2=Bates |first2=Douglas M }}</ref><ref>{{cite journal |doi=10.2307/1390614 |jstor=1390614 |title=Fitting Mixed-Effects Models Using Efficient EM-Type Algorithms |journal=Journal of Computational and Graphical Statistics |volume=9 |issue=1 |pages=78–98 |year=2000 |last1=Van Dyk |first1=David A }}</ref> notably in [[quantitative genetics]].<ref>{{cite journal |doi=10.1111/anzs.12208 |title=A new REML (parameter expanded) EM algorithm for linear mixed models |journal=Australian & New Zealand Journal of Statistics |volume=59 |issue=4 |pages=433 |year=2017 |last1=Diffey |first1=S. M |last2=Smith |first2=A. B |last3=Welsh |first3=A. H |last4=Cullis |first4=B. R |doi-access=free }}</ref>
 
EM is frequently used for parameter estimation of [[mixed model]]s,<ref>{{cite journal |doi=10.1080/01621459.1988.10478693 |title=Newton—Raphson and EM Algorithms for Linear Mixed-Effects Models for Repeated-Measures Data |journal=Journal of the American Statistical Association |volume=83 |issue=404 |pages=1014 |year=1988 |last1=Lindstrom |first1=Mary J |last2=Bates |first2=Douglas M }}</ref><ref>{{cite journal |doi=10.2307/1390614 |jstor=1390614 |title=Fitting Mixed-Effects Models Using Efficient EM-Type Algorithms |journal=Journal of Computational and Graphical Statistics |volume=9 |issue=1 |pages=78–98 |year=2000 |last1=Van Dyk |first1=David A }}</ref> notably in [[quantitative genetics]].<ref>{{cite journal |doi=10.1111/anzs.12208 |title=A new REML (parameter expanded) EM algorithm for linear mixed models |journal=Australian & New Zealand Journal of Statistics |volume=59 |issue=4 |pages=433 |year=2017 |last1=Diffey |first1=S. M |last2=Smith |first2=A. B |last3=Welsh |first3=A. H |last4=Cullis |first4=B. R |doi-access=free }}</ref>
   −
EM is frequently used for parameter estimation of mixed models, notably in quantitative genetics.
     −
Em 经常用于混合模型的参数估计,尤其是在数量遗传学。
      +
|last1 = Bishop |first1 = Christopher M.
    +
1 = Bishop | first1 = Christopher m.
    
In [[psychometrics]], EM is almost indispensable for estimating item parameters and latent abilities of [[item response theory]] models.
 
In [[psychometrics]], EM is almost indispensable for estimating item parameters and latent abilities of [[item response theory]] models.
   −
In psychometrics, EM is almost indispensable for estimating item parameters and latent abilities of item response theory models.
+
|author-link = Christopher Bishop
 +
 
 +
克里斯托弗 · 毕晓普
   −
在心理测量学中,EM 对于估计项目反应理论模型的项目参数和潜在能力几乎是不可或缺的。
         +
|title = Pattern Recognition and Machine Learning
 +
 +
| title = 模式识别和机器学习
    
With the ability to deal with missing data and observe unidentified variables, EM is becoming a useful tool to price and manage risk of a portfolio.{{Citation needed|date=November 2017}}
 
With the ability to deal with missing data and observe unidentified variables, EM is becoming a useful tool to price and manage risk of a portfolio.{{Citation needed|date=November 2017}}
   −
With the ability to deal with missing data and observe unidentified variables, EM is becoming a useful tool to price and manage risk of a portfolio.
+
|year = 2006
 +
 
 +
2006年
 +
 
   −
由于能够处理丢失的数据和观察不明变量,EM 正成为一个有用的工具,价格和管理投资组合的风险。
      +
|publisher = Springer
    +
| publisher = Springer
    
The EM algorithm (and its faster variant [[ordered subset expectation maximization]]) is also widely used in [[medical imaging|medical image]] reconstruction, especially in [[positron emission tomography]], [[single photon emission computed tomography]], and x-ray [[computed tomography]]. See below for other faster variants of EM.
 
The EM algorithm (and its faster variant [[ordered subset expectation maximization]]) is also widely used in [[medical imaging|medical image]] reconstruction, especially in [[positron emission tomography]], [[single photon emission computed tomography]], and x-ray [[computed tomography]]. See below for other faster variants of EM.
   −
The EM algorithm (and its faster variant ordered subset expectation maximization) is also widely used in medical image reconstruction, especially in positron emission tomography, single photon emission computed tomography, and x-ray computed tomography. See below for other faster variants of EM.
+
|ref = CITEREFBishop2006
   −
Em 算法(及其快速变化的有序子集期望最大化)也广泛应用于医学图像重建,特别是在正电子发射计算机断层扫描、单光子发射 X射线计算机断层成像和 X射线计算机断层成像中。下面是 EM 的其他更快的变体。
+
2006
         −
In [[structural engineering]], the Structural Identification using Expectation Maximization (STRIDE) <ref>Matarazzo, T. J., and Pakzad, S. N. (2016). “STRIDE for Structural Identification using Expectation Maximization: Iterative Output-Only Method for Modal Identification.” Journal of Engineering Mechanics.http://ascelibrary.org/doi/abs/10.1061/(ASCE)EM.1943-7889.0000951</ref> algorithm is an output-only method for identifying natural vibration properties of a structural system using sensor data (see [[Operational Modal Analysis]]).
+
  |isbn = 978-0-387-31073-2
 +
 
 +
| isbn = 978-0-387-31073-2
   −
In structural engineering, the Structural Identification using Expectation Maximization (STRIDE)   algorithm is an output-only method for identifying natural vibration properties of a structural system using sensor data (see Operational Modal Analysis).
+
In [[structural engineering]], the Structural Identification using Expectation Maximization (STRIDE)<ref>Matarazzo, T. J., and Pakzad, S. N. (2016). “STRIDE for Structural Identification using Expectation Maximization: Iterative Output-Only Method for Modal Identification.” Journal of Engineering Mechanics.http://ascelibrary.org/doi/abs/10.1061/(ASCE)EM.1943-7889.0000951</ref> algorithm is an output-only method for identifying natural vibration properties of a structural system using sensor data (see [[Operational Modal Analysis]]).
   −
在结构工程中,利用期望最大化(STRIDE)算法进行结构识别是一种仅输出的方法,用于利用传感器数据识别结构系统的自然振动特性(见运行模态分析)。
+
}}
 +
 
 +
}}
      第790行: 第666行:     
A [[Kalman filter]] is typically used for on-line state estimation and a minimum-variance smoother may be employed for off-line or batch state estimation. However, these minimum-variance solutions require estimates of the state-space model parameters. EM algorithms can be used for solving joint state and parameter estimation problems.
 
A [[Kalman filter]] is typically used for on-line state estimation and a minimum-variance smoother may be employed for off-line or batch state estimation. However, these minimum-variance solutions require estimates of the state-space model parameters. EM algorithms can be used for solving joint state and parameter estimation problems.
  −
A Kalman filter is typically used for on-line state estimation and a minimum-variance smoother may be employed for off-line or batch state estimation. However, these minimum-variance solutions require estimates of the state-space model parameters. EM algorithms can be used for solving joint state and parameter estimation problems.
  −
  −
卡尔曼滤波器通常用于在线状态估计,最小方差平滑器可用于离线或批处理状态估计。然而,这些最小方差解需要状态空间模型参数的估计。Em 算法可用于求解状态和参数的联合估计问题。
  −
        −
Filtering and smoothing EM algorithms arise by repeating this two-step procedure:
      
Filtering and smoothing EM algorithms arise by repeating this two-step procedure:
 
Filtering and smoothing EM algorithms arise by repeating this two-step procedure:
  −
滤波和平滑 EM 算法是通过重复这两个步骤产生的:
            
;E-step
 
;E-step
  −
E-step
  −
  −
E-step
      
: Operate a Kalman filter or a minimum-variance smoother designed with current parameter estimates to obtain updated state estimates.
 
: Operate a Kalman filter or a minimum-variance smoother designed with current parameter estimates to obtain updated state estimates.
  −
Operate a Kalman filter or a minimum-variance smoother designed with current parameter estimates to obtain updated state estimates.
  −
  −
操作 Kalman 滤波器或最小方差平滑设计与当前的参数估计,以获得更新的状态估计。
            
;M-step
 
;M-step
  −
M-step
  −
  −
M-step
      
: Use the filtered or smoothed state estimates within maximum-likelihood calculations to obtain updated parameter estimates.
 
: Use the filtered or smoothed state estimates within maximum-likelihood calculations to obtain updated parameter estimates.
  −
Use the filtered or smoothed state estimates within maximum-likelihood calculations to obtain updated parameter estimates.
  −
  −
使用最大似然计算中的滤波或平滑状态估计来获得更新的参数估计。
        第835行: 第687行:  
Suppose that a [[Kalman filter]] or minimum-variance smoother operates on measurements of a single-input-single-output system that possess additive white noise. An updated measurement noise variance estimate can be obtained from the [[maximum likelihood]] calculation
 
Suppose that a [[Kalman filter]] or minimum-variance smoother operates on measurements of a single-input-single-output system that possess additive white noise. An updated measurement noise variance estimate can be obtained from the [[maximum likelihood]] calculation
   −
Suppose that a Kalman filter or minimum-variance smoother operates on measurements of a single-input-single-output system that possess additive white noise. An updated measurement noise variance estimate can be obtained from the maximum likelihood calculation
+
: <math>\widehat{\sigma}^2_v = \frac{1}{N} \sum_{k=1}^N {(z_k-\widehat{x}_k)}^2,</math>
 
  −
假设卡尔曼滤波器或最小方差平滑器对具有附加白噪声的单输入单输出系统进行测量。通过极大似然估计,可以得到更新的测量噪声方差估计
  −
 
  −
:<math>\widehat{\sigma}^2_v = \frac{1}{N} \sum_{k=1}^N {(z_k-\widehat{x}_k)}^2</math>
  −
 
  −
<math>\widehat{\sigma}^2_v = \frac{1}{N} \sum_{k=1}^N {(z_k-\widehat{x}_k)}^2</math>
  −
 
  −
<math>\widehat{\sigma}^2_v = \frac{1}{N} \sum_{k=1}^N {(z_k-\widehat{x}_k)}^2</math>
            
where <math>\widehat{x}_k</math> are scalar output estimates calculated by a filter or a smoother from N scalar measurements <math>z_k</math>. The above update can also be applied to updating a Poisson measurement noise intensity. Similarly, for a first-order auto-regressive process, an updated process noise variance estimate can be calculated by
 
where <math>\widehat{x}_k</math> are scalar output estimates calculated by a filter or a smoother from N scalar measurements <math>z_k</math>. The above update can also be applied to updating a Poisson measurement noise intensity. Similarly, for a first-order auto-regressive process, an updated process noise variance estimate can be calculated by
  −
where <math>\widehat{x}_k</math> are scalar output estimates calculated by a filter or a smoother from N scalar measurements <math>z_k</math>. The above update can also be applied to updating a Poisson measurement noise intensity. Similarly, for a first-order auto-regressive process, an updated process noise variance estimate can be calculated by
  −
  −
其中 math  widehat { x } k / math 是由一个过滤器或 n 个标量测量的平滑器计算的标量输出估计值。上述更新也可以应用于泊松测量噪声强度的更新。同样,对于一阶自回归过程,更新后的过程噪声方差估计可以通过
  −
  −
:<math>\widehat{\sigma}^2_w =  \frac{1}{N} \sum_{k=1}^N {(\widehat{x}_{k+1}-\widehat{F}\widehat{{x}}_k)}^2</math>
  −
  −
<math>\widehat{\sigma}^2_w =  \frac{1}{N} \sum_{k=1}^N {(\widehat{x}_{k+1}-\widehat{F}\widehat{x}_k)}^2</math>
  −
  −
<math>\widehat{\sigma}^2_w = \frac{1}{N} \sum_{k=1}^N {(\widehat{x}_{k+1}-\widehat{F}\widehat{x}_k)}^2</math>
  −
  −
  −
  −
where <math>\widehat{x}_k</math> and <math>\widehat{x}_{k+1}</math> are scalar state estimates calculated by a filter or a smoother. The updated model coefficient estimate is obtained via
  −
  −
where <math>\widehat{x}_k</math> and <math>\widehat{x}_{k+1}</math> are scalar state estimates calculated by a filter or a smoother. The updated model coefficient estimate is obtained via
  −
  −
其中 math 和 math 是由滤子或平滑器计算的标量状态估计。通过对模型系数的修正,得到了模型系数的修正估计
  −
  −
:<math>\widehat{F} = \frac{\sum_{k=1}^N (\widehat{x}_{k+1}-\widehat{F} \widehat{x}_k)}{\sum_{k=1}^N \widehat{x}_k^2}. </math>
  −
  −
<math>\widehat{F} = \frac{\sum_{k=1}^N (\widehat{x}_{k+1}-\widehat{F} \widehat{x}_k)}{\sum_{k=1}^N \widehat{x}_k^2}. </math>
  −
  −
<math>\widehat{F} = \frac{\sum_{k=1}^N (\widehat{x}_{k+1}-\widehat{F} \widehat{x}_k)}{\sum_{k=1}^N \widehat{x}_k^2}. </math>
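A sketch of these three M-step updates, assuming <code>z</code> and <code>xhat</code> are length-N NumPy arrays of measurements and smoothed state estimates and <code>F</code> is the current coefficient estimate; the names are illustrative, and the AR(1) terms follow the formulas above as closely as the array bounds allow.

<syntaxhighlight lang="python">
import numpy as np

def update_measurement_noise(z, xhat):
    """sigma_v^2 <- (1/N) * sum_k (z_k - xhat_k)^2"""
    return np.mean((z - xhat) ** 2)

def update_process_noise(xhat, F):
    """sigma_w^2 <- average of (xhat_{k+1} - F * xhat_k)^2 over the available pairs"""
    return np.mean((xhat[1:] - F * xhat[:-1]) ** 2)

def update_F(xhat, F):
    """Model-coefficient update built from the same residuals, mirroring the last formula above."""
    return np.sum(xhat[1:] - F * xhat[:-1]) / np.sum(xhat[:-1] ** 2)
</syntaxhighlight>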
  −
  −
  −
  −
The convergence of parameter estimates such as those above are well studied.<ref>{{Cite journal
  −
  −
The convergence of parameter estimates such as those above are well studied.<ref>{{Cite journal
  −
  −
参数估计的收敛性,例如上面那些已经被很好地研究过了
  −
  −
|last = Einicke
  −
  −
|last = Einicke
  −
  −
|last = Einicke
  −
  −
|first =  G.A.
  −
  −
|first =  G.A.
  −
  −
首先是 g.a。
  −
  −
|last2 = Malos
  −
  −
|last2 = Malos
  −
  −
2 Malos
  −
  −
|first2 = J.T.
  −
  −
|first2 = J.T.
  −
  −
首先是 j.t。
  −
  −
|last3 = Reid
  −
  −
|last3 = Reid
  −
  −
3 Reid
  −
  −
|first3 =  D.C.
  −
  −
|first3 =  D.C.
  −
  −
首先是华盛顿特区。
  −
  −
|last4 = Hainsworth
  −
  −
|last4 = Hainsworth
  −
  −
最后4海恩斯沃思
  −
  −
|first4 = D.W.
  −
  −
|first4 = D.W.
  −
  −
首先是华盛顿特区。
  −
  −
|title = Riccati Equation and EM Algorithm Convergence for Inertial Navigation Alignment
  −
  −
|title = Riccati Equation and EM Algorithm Convergence for Inertial Navigation Alignment
  −
  −
Riccati方程和 EM 算法在惯导对准中的收敛
  −
  −
|journal = IEEE Trans. Signal Process.
  −
  −
|journal = IEEE Trans. Signal Process.
  −
  −
美国电气和电子工程师协会杂志。信号处理。
  −
  −
|volume = 57
  −
  −
|volume = 57
  −
  −
第57卷
  −
  −
|issue = 1
  −
  −
|issue = 1
  −
  −
第一期
  −
  −
|pages = 370–375
  −
  −
|pages = 370–375
  −
  −
第370-375页
  −
  −
|date=January 2009
  −
  −
|date=January 2009
  −
  −
2009年1月
  −
  −
|postscript = <!--None-->
  −
  −
|postscript = <!--None-->
  −
  −
| 附言! ——不——
  −
  −
|doi = 10.1109/TSP.2008.2007090
  −
  −
|doi = 10.1109/TSP.2008.2007090
  −
  −
| doi 10.1109 / TSP. 2008.2007090
  −
  −
|bibcode = 2009ITSP...57..370E
  −
  −
|bibcode = 2009ITSP...57..370E
  −
  −
| bibcode 2009ITSP... 57. . 370 e
  −
  −
}}</ref><ref>{{Cite journal
  −
  −
}}</ref><ref>{{Cite journal
  −
  −
} / ref { Cite journal
  −
  −
|last = Einicke
  −
  −
|last = Einicke
  −
  −
|last = Einicke
  −
  −
|first =  G.A.
  −
  −
|first =  G.A.
  −
  −
首先是 g.a。
  −
  −
|last2 = Falco
  −
  −
|last2 = Falco
  −
  −
2 Falco
  −
  −
|first2 =  G.
  −
  −
|first2 =  G.
  −
  −
| first2 g.
  −
  −
|last3 = Malos
  −
  −
|last3 = Malos
  −
  −
3 Malos
  −
  −
|first3 = J.T.
  −
  −
|first3 = J.T.
  −
  −
首先是 j.t。
  −
  −
|title = EM Algorithm State Matrix Estimation for Navigation
  −
  −
|title = EM Algorithm State Matrix Estimation for Navigation
  −
  −
导航中的 EM 算法状态矩阵估计
  −
  −
|journal = IEEE Signal Processing Letters
  −
  −
|journal = IEEE Signal Processing Letters
  −
  −
信号处理快报
  −
  −
|volume = 17
  −
  −
|volume = 17
  −
  −
第17卷
  −
  −
|issue = 5
  −
  −
|issue = 5
  −
  −
第五期
  −
  −
|pages = 437–440
  −
  −
|pages = 437–440
  −
  −
第437-440页
  −
  −
|date=May 2010
  −
  −
|date=May 2010
  −
  −
2010年5月
  −
  −
|postscript = <!--None-->
  −
  −
|postscript = <!--None-->
  −
  −
| 附言! ——不——
  −
  −
|doi = 10.1109/LSP.2010.2043151
  −
  −
|doi = 10.1109/LSP.2010.2043151
  −
  −
| doi 10.1109 / LSP. 2010.2043151
  −
  −
|bibcode = 2010ISPL...17..437E }}</ref><ref>{{Cite journal
  −
  −
|bibcode = 2010ISPL...17..437E }}</ref><ref>{{Cite journal
  −
  −
2010ISPL... 17. . 437 e } / ref { Cite journal
  −
  −
|last = Einicke
  −
  −
|last = Einicke
  −
  −
|last = Einicke
  −
  −
|first =  G.A.
  −
  −
|first =  G.A.
  −
  −
首先是 g.a。
  −
  −
|last2 = Falco
  −
  −
|last2 = Falco
  −
  −
2 Falco
  −
  −
|first2 =  G.
  −
  −
|first2 =  G.
  −
  −
| first2 g.
  −
  −
|last3 = Dunn
  −
  −
|last3 = Dunn
  −
  −
3 Dunn
  −
  −
|first3 = M.T.
  −
  −
|first3 = M.T.
  −
  −
第一天3点。
  −
  −
|last4 = Reid
  −
  −
|last4 = Reid
  −
  −
4 Reid
  −
  −
|first4 = D.C.
  −
  −
|first4 = D.C.
  −
  −
首先是华盛顿特区。
  −
  −
|title = Iterative Smoother-Based Variance Estimation
  −
  −
|title = Iterative Smoother-Based Variance Estimation
  −
  −
基于迭代平滑器的方差估计
  −
  −
|journal = IEEE Signal Processing Letters
  −
  −
|journal = IEEE Signal Processing Letters
  −
  −
信号处理快报
  −
  −
|volume = 19
  −
  −
|volume = 19
  −
  −
第19卷
  −
  −
|issue = 5
  −
  −
|issue = 5
  −
  −
第五期
  −
  −
|pages = 275–278
  −
  −
|pages = 275–278
  −
  −
第275-278页
  −
  −
|date=May 2012
  −
  −
|date=May 2012
  −
  −
2012年5月
  −
  −
|postscript = <!--None-->
  −
  −
|postscript = <!--None-->
  −
  −
| 附言! ——不——
  −
  −
|bibcode = 2012ISPL...19..275E
  −
  −
|bibcode = 2012ISPL...19..275E
  −
  −
| bibcode 2012ISPL... 19. . 275 e
  −
  −
|doi = 10.1109/LSP.2012.2190278
  −
  −
|doi = 10.1109/LSP.2012.2190278
  −
  −
| doi 10.1109 / LSP. 2012.2190278
  −
  −
}}</ref><ref>{{Cite journal
  −
  −
}}</ref><ref>{{Cite journal
  −
  −
} / ref { Cite journal
  −
  −
|last = Einicke
  −
  −
|last = Einicke
  −
  −
|last = Einicke
  −
  −
|first =  G.A.
  −
  −
|first =  G.A.
  −
  −
首先是 g.a。
  −
  −
|title = Iterative Filtering and Smoothing of Measurements Possessing Poisson Noise
  −
  −
|title = Iterative Filtering and Smoothing of Measurements Possessing Poisson Noise
  −
  −
具有泊松噪声的测量数据的迭代滤波和平滑
  −
  −
|journal = IEEE Transactions on Aerospace and Electronic Systems
  −
  −
|journal = IEEE Transactions on Aerospace and Electronic Systems
  −
  −
航空航天与电子系统杂志
  −
  −
|volume = 51
  −
  −
|volume = 51
  −
  −
第51卷
  −
  −
|issue = 3
  −
  −
|issue = 3
  −
  −
第三期
  −
  −
|pages = 2205–2011
  −
  −
|pages = 2205–2011
  −
  −
2205-2011页
  −
  −
|date=Sep 2015
  −
  −
|date=Sep 2015
  −
  −
日期: 2015年9月
  −
  −
|postscript = <!--None-->
  −
  −
|postscript = <!--None-->
  −
  −
| 附言! ——不——
  −
  −
|doi = 10.1109/TAES.2015.140843
  −
  −
|doi = 10.1109/TAES.2015.140843
  −
  −
10.1109 / TAES. 2015.140843
  −
  −
|bibcode = 2015ITAES..51.2205E
  −
  −
|bibcode = 2015ITAES..51.2205E
  −
  −
| bibcode 2015ITAES. . 51.2205 e
  −
  −
}}</ref>
  −
  −
}}</ref>
  −
  −
{} / ref
  −
  −
  −
  −
== Variants ==
  −
  −
A number of methods have been proposed to accelerate the sometimes slow convergence of the EM algorithm, such as those using [[conjugate gradient]] and modified [[Newton's method]]s (Newton–Raphson).<ref>{{cite journal |first1= Mortaza |last1=Jamshidian |first2=Robert I. |last2=Jennrich|title=Acceleration of the EM Algorithm by using Quasi-Newton Methods |year=1997 |journal=[[Journal of the Royal Statistical Society, Series B]] |volume=59 |issue=2 |pages=569–587 |doi=10.1111/1467-9868.00083 |mr=1452026 }}</ref> Also, EM can be used with constrained estimation methods.
  −
  −
A number of methods have been proposed to accelerate the sometimes slow convergence of the EM algorithm, such as those using conjugate gradient and modified Newton's methods (Newton–Raphson). Also, EM can be used with constrained estimation methods.
  −
  −
为了提高 EM 算法的收敛速度,人们提出了许多方法,如共轭梯度法和修正牛顿法。此外,EM 还可以与约束估计方法结合使用。
  −
  −
  −
  −
''Parameter-expanded expectation maximization (PX-EM)'' algorithm often provides speed up by "us[ing] a `covariance adjustment' to correct the analysis of the M step, capitalising on extra information captured in the imputed complete data".<ref>{{cite journal |doi=10.1093/biomet/85.4.755 |title=Parameter expansion to accelerate EM: The PX-EM algorithm |journal=Biometrika |volume=85 |issue=4 |pages=755–770 |year=1998 |last1=Liu |first1=C |citeseerx=10.1.1.134.9617 }}</ref>
  −
  −
Parameter-expanded expectation maximization (PX-EM) algorithm often provides speed up by "us[ing] a `covariance adjustment' to correct the analysis of the M step, capitalising on extra information captured in the imputed complete data".
  −
  −
参数扩展期望最大化(PX-EM)算法通过“利用输入完整数据中获得的额外信息,通过‘协方差调整’来校正 m 步的分析” ,从而提高了计算速度。
  −
  −
  −
  −
''Expectation conditional maximization (ECM)'' replaces each M step with a sequence of conditional maximization (CM) steps in which each parameter ''θ''<sub>''i''</sub> is maximized individually, conditionally on the other parameters remaining fixed.<ref>{{cite journal|last1=Meng  |first1= Xiao-Li |last2=Rubin |first2=Donald B. |author-link2=Donald Rubin |title=Maximum likelihood estimation via the ECM algorithm: A general framework |year=1993 |journal=[[Biometrika]] |volume=80 |issue=2 |pages=267–278 |doi=10.1093/biomet/80.2.267 |mr=1243503|url= https://semanticscholar.org/paper/721b25ffad2623a8d1e8044882f66e0dbe678f1d }}</ref> Itself can be extended into the ''Expectation conditional maximization either (ECME)'' algorithm.<ref>{{cite journal |doi= 10.1093/biomet/81.4.633|jstor=2337067 |title=The ECME Algorithm: A Simple Extension of EM and ECM with Faster Monotone Convergence |journal=Biometrika |volume=81 |issue=4 |pages=633 |year=1994 |last1=Liu |first1=Chuanhai |last2=Rubin |first2=Donald B }}</ref>
  −
  −
Expectation conditional maximization (ECM) replaces each M step with a sequence of conditional maximization (CM) steps in which each parameter θ<sub>i</sub> is maximized individually, conditionally on the other parameters remaining fixed. Itself can be extended into the Expectation conditional maximization either (ECME) algorithm.
  −
  −
期望条件最大化(ECM)用一系列条件最大化(CM)步骤代替每个 M 步,其中每个参数 θ<sub>i</sub> 都在其余参数保持固定的条件下被单独最大化。ECM 本身又可以进一步扩展为期望条件最大化(ECME)算法。
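To make the CM idea concrete, the following is a minimal Python sketch (not taken from the cited papers) of one ECM-style sweep: a toy surrogate objective <code>Q</code> stands in for the expected complete-data log-likelihood, and each coordinate of the parameter vector is maximized in turn while the others stay fixed. In a full ECM run, <code>Q</code> would be rebuilt in an E step between sweeps; all names here (<code>Q</code>, <code>cm_sweep</code>) are illustrative assumptions.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize_scalar

def Q(theta):
    # Toy concave surrogate standing in for the expected complete-data log-likelihood.
    a = np.array([1.0, 2.0])
    return -np.sum((theta - a) ** 2) - 0.5 * theta[0] * theta[1]

def cm_sweep(theta):
    """One ECM-style sweep: maximize Q in each parameter, the others held fixed."""
    theta = theta.copy()
    for i in range(theta.size):
        def neg_q(t, i=i):
            trial = theta.copy()
            trial[i] = t
            return -Q(trial)
        theta[i] = minimize_scalar(neg_q).x   # conditional maximization (CM) step
    return theta

theta = np.zeros(2)
for _ in range(20):          # repeat sweeps until (approximately) stationary
    theta = cm_sweep(theta)
print(theta)
</syntaxhighlight>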
  −
  −
  −
  −
This idea is further extended in ''generalized expectation maximization (GEM)'' algorithm, in which is sought only an increase in the objective function ''F'' for both the E step and M step as described in the [[#As a maximization-maximization procedure|As a maximization-maximization procedure]] section.<ref name="neal1999"/> GEM is further developed in a distributed environment and shows promising results.<ref>
  −
  −
This idea is further extended in generalized expectation maximization (GEM) algorithm, in which is sought only an increase in the objective function F for both the E step and M step as described in the As a maximization-maximization procedure section. GEM is further developed in a distributed environment and shows promising results.<ref>
  −
  −
这一思想在广义期望最大化(GEM)算法中得到进一步推广:如“作为最大化-最大化过程”一节所述,该算法在 E 步和 M 步中都只要求目标函数 F 有所增加即可。GEM 还被进一步推广到分布式环境中,并显示出良好的效果。
  −
  −
{{cite journal
  −
  −
{{cite journal
  −
  −
{{cite journal
  −
  −
|author1=Jiangtao Yin |author2=Yanfeng Zhang |author3=Lixin Gao |title= Accelerating Expectation-Maximization Algorithms with Frequent Updates
  −
  −
|author1=Jiangtao Yin |author2=Yanfeng Zhang |author3=Lixin Gao |title= Accelerating Expectation-Maximization Algorithms with Frequent Updates
  −
  −
|author1=Jiangtao Yin |author2=Yanfeng Zhang |author3=Lixin Gao |title= Accelerating Expectation-Maximization Algorithms with Frequent Updates
  −
  −
|journal= Proceedings of the IEEE International Conference on Cluster Computing
  −
  −
|journal= Proceedings of the IEEE International Conference on Cluster Computing
  −
  −
美国电气和电子工程师协会集群计算国际会议记录
  −
  −
|year= 2012
  −
  −
|year= 2012
  −
  −
2012年
  −
  −
|url= http://rio.ecs.umass.edu/mnilpub/papers/cluster2012-yin.pdf
  −
  −
|url= http://rio.ecs.umass.edu/mnilpub/papers/cluster2012-yin.pdf
  −
  −
Http://rio.ecs.umass.edu/mnilpub/papers/cluster2012-yin.pdf
  −
  −
}}
  −
  −
}}
  −
  −
}}
  −
  −
</ref>
  −
  −
</ref>
  −
  −
</ref>
  −
  −
  −
  −
It is also possible to consider the EM algorithm as a subclass of the '''[[MM algorithm|MM]]''' (Majorize/Minimize or Minorize/Maximize, depending on context) algorithm,<ref>Hunter DR and Lange K (2004), [http://www.stat.psu.edu/~dhunter/papers/mmtutorial.pdf A Tutorial on MM Algorithms], The American Statistician, 58: 30-37</ref> and therefore use any machinery developed in the more general case.
  −
  −
It is also possible to consider the EM algorithm as a subclass of the MM (Majorize/Minimize or Minorize/Maximize, depending on context) algorithm, and therefore use any machinery developed in the more general case.
  −
  −
也可以考虑将 EM 算法作为 MM (majorize / minize 或 minorize / maximize,取决于上下文)算法的子类,因此可以使用在更一般情况下开发的任何机制。
  −
  −
  −
  −
===α-EM algorithm===
  −
  −
The Q-function used in the EM algorithm is based on the log likelihood. Therefore, it is regarded as the log-EM algorithm. The use of the log likelihood can be generalized to that of the α-log likelihood ratio. Then, the α-log likelihood ratio of the observed data can be exactly expressed as equality by using the Q-function of the α-log likelihood ratio and the α-divergence. Obtaining this Q-function is a generalized E step. Its maximization is a generalized M step. This pair is called the α-EM algorithm
  −
  −
The Q-function used in the EM algorithm is based on the log likelihood. Therefore, it is regarded as the log-EM algorithm. The use of the log likelihood can be generalized to that of the α-log likelihood ratio. Then, the α-log likelihood ratio of the observed data can be exactly expressed as equality by using the Q-function of the α-log likelihood ratio and the α-divergence. Obtaining this Q-function is a generalized E step. Its maximization is a generalized M step. This pair is called the α-EM algorithm
  −
  −
EM 算法中使用的 Q 函数基于对数似然,因此也被称为 log-EM 算法。对数似然的使用可以推广到 α-对数似然比。于是,利用 α-对数似然比的 Q 函数和 α-散度,观测数据的 α-对数似然比可以被精确地表示为一个等式。求出这个 Q 函数是广义的 E 步,对其最大化则是广义的 M 步,这一对步骤称为 α-EM 算法
  −
  −
<ref>
  −
  −
<ref>
  −
  −
<ref>
  −
  −
{{cite journal
  −
  −
{{cite journal
  −
  −
{{cite journal
  −
  −
|last=Matsuyama |first=Yasuo
  −
  −
|last=Matsuyama |first=Yasuo
  −
  −
|last=Matsuyama |first=Yasuo
  −
  −
|title=The α-EM algorithm: Surrogate likelihood maximization using α-logarithmic information measures
  −
  −
|title=The α-EM algorithm: Surrogate likelihood maximization using α-logarithmic information measures
  −
  −
|title=The α-EM algorithm: Surrogate likelihood maximization using α-logarithmic information measures
  −
  −
|journal=IEEE Transactions on Information Theory
  −
  −
|journal=IEEE Transactions on Information Theory
  −
  −
IEEE 信息论汇刊(IEEE Transactions on Information Theory)
  −
  −
|volume=49 | year=2003 |pages=692–706 |issue=3
  −
  −
|volume=49 | year=2003 |pages=692–706 |issue=3
  −
  −
2003年,第49卷,第692-706页,第3期
  −
  −
|doi=10.1109/TIT.2002.808105
  −
  −
|doi=10.1109/TIT.2002.808105
  −
  −
10.1109 / TIT. 2002.808105
  −
  −
}}
  −
  −
}}
  −
  −
}}
  −
  −
</ref>
  −
  −
</ref>
  −
  −
</ref>
  −
  −
which contains the log-EM algorithm as its subclass. Thus, the α-EM algorithm by [[Yasuo Matsuyama]] is an exact generalization of the log-EM algorithm. No computation of gradient or Hessian matrix is needed. The α-EM shows faster convergence than the log-EM algorithm by choosing an appropriate α. The α-EM algorithm leads to a faster version of the Hidden Markov model estimation algorithm α-HMM.
  −
  −
which contains the log-EM algorithm as its subclass. Thus, the α-EM algorithm by Yasuo Matsuyama is an exact generalization of the log-EM algorithm. No computation of gradient or Hessian matrix is needed. The α-EM shows faster convergence than the log-EM algorithm by choosing an appropriate α. The α-EM algorithm leads to a faster version of the Hidden Markov model estimation algorithm α-HMM.
  −
  −
该算法包含 log-EM 算法作为其子类。因此,由 Yasuo Matsuyama 提出的 α-EM 算法是 log-EM 算法的精确推广,且无需计算梯度或 Hessian 矩阵。通过选择合适的 α,α-EM 比 log-EM 算法收敛更快。α-EM 算法还带来了隐马尔可夫模型估计算法的一个更快版本 α-HMM。
  −
  −
<ref>
  −
  −
<ref>
  −
  −
<ref>
  −
  −
{{cite journal
  −
  −
{{cite journal
  −
  −
{{cite journal
  −
  −
|last=Matsuyama |first=Yasuo
  −
  −
|last=Matsuyama |first=Yasuo
  −
  −
|last=Matsuyama |first=Yasuo
  −
  −
|title=Hidden Markov model estimation based on alpha-EM algorithm: Discrete and continuous alpha-HMMs
  −
  −
|title=Hidden Markov model estimation based on alpha-EM algorithm: Discrete and continuous alpha-HMMs
  −
  −
| 基于 alpha-EM 算法的隐马尔可夫模型估计: 离散和连续 alpha-HMMs
  −
  −
|journal=International Joint Conference on Neural Networks
  −
  −
|journal=International Joint Conference on Neural Networks
  −
  −
国际神经网络联合会议(International Joint Conference on Neural Networks)
  −
  −
| year=2011 |pages=808–816
  −
  −
| year=2011 |pages=808–816
  −
  −
2011年,第808-816页
  −
  −
}}
  −
  −
}}
  −
  −
}}
  −
  −
</ref>
  −
  −
</ref>
  −
  −
</ref>
  −
  −
  −
  −
== Relation to variational Bayes methods ==
  −
  −
EM is a partially non-Bayesian, maximum likelihood method.  Its final result gives a [[probability distribution]] over the latent variables (in the Bayesian style) together with a point estimate for ''θ'' (either a [[maximum likelihood estimation|maximum likelihood estimate]] or a posterior mode). A fully Bayesian version of this may be wanted, giving a probability distribution over ''θ'' and the latent variables.  The Bayesian approach to inference is simply to treat ''θ'' as another latent variable.  In this paradigm, the distinction between the E and M steps disappears.  If using the factorized Q approximation as described above ([[variational Bayes]]), solving can iterate over each latent variable (now including ''θ'') and optimize them one at a time.  Now, ''k'' steps per iteration are needed, where ''k'' is the number of latent variables.  For [[graphical models]] this is easy to do as each variable's new ''Q'' depends only on its [[Markov blanket]], so local [[Message passing (disambiguation)|message passing]] can be used for efficient inference.
  −
  −
EM is a partially non-Bayesian, maximum likelihood method.  Its final result gives a probability distribution over the latent variables (in the Bayesian style) together with a point estimate for θ (either a maximum likelihood estimate or a posterior mode). A fully Bayesian version of this may be wanted, giving a probability distribution over θ and the latent variables.  The Bayesian approach to inference is simply to treat θ as another latent variable.  In this paradigm, the distinction between the E and M steps disappears.  If using the factorized Q approximation as described above (variational Bayes), solving can iterate over each latent variable (now including θ) and optimize them one at a time.  Now, k steps per iteration are needed, where k is the number of latent variables.  For graphical models this is easy to do as each variable's new Q depends only on its Markov blanket, so local message passing can be used for efficient inference.
  −
  −
EM 是一种部分非贝叶斯的最大似然方法。它的最终结果给出潜变量上的一个概率分布(贝叶斯风格),以及 θ 的一个点估计(最大似然估计或后验众数)。有时需要一个完全贝叶斯的版本,即同时给出 θ 和潜变量上的概率分布。贝叶斯推断的做法就是把 θ 也当作一个潜变量。在这种范式下,E 步与 M 步的区别消失了。如果使用上面所述的分解式 Q 近似(变分贝叶斯),求解时可以对每个潜变量(现在也包括 θ)逐一迭代优化。此时每次迭代需要 k 步,其中 k 是潜变量的数目。对于图模型,这很容易实现,因为每个变量的新 Q 只依赖于它的马尔可夫毯,因此可以利用局部消息传递进行高效推断。
  −
  −
  −
  −
== Geometric interpretation ==
  −
  −
{{details|Information geometry}}
  −
  −
In [[information geometry]], the E step and the M step are interpreted as projections under dual [[affine connection]]s, called the e-connection and the m-connection; the [[Kullback–Leibler divergence]] can also be understood in these terms.
  −
  −
In information geometry, the E step and the M step are interpreted as projections under dual affine connections, called the e-connection and the m-connection; the Kullback–Leibler divergence can also be understood in these terms.
  −
  −
在信息几何中,E 步和 M 步被解释为对偶仿射联络(分别称为 e-联络和 m-联络)下的投影;KL 散度(Kullback–Leibler divergence)也可以用这些术语来理解。
  −
  −
  −
  −
== Examples ==
  −
  −
=== Gaussian mixture === <!--This section is linked from [[Matrix calculus]] -->
  −
  −
=== Gaussian mixture === <!--This section is linked from Matrix calculus -->
  −
  −
=== 高斯混合 === <!-- 本节被“矩阵微积分”条目链接 -->
  −
  −
  −
  −
[[File:ClusterAnalysis_Mouse.svg|thumb|400px|Comparison of [[K-means clustering|k-means]] and EM on artificial data  visualized with [[Environment for DeveLoping KDD-Applications Supported by Index-Structures|ELKI]]. Using the variances, the EM algorithm can describe the normal distributions exactly, while k-means splits the data in [[Voronoi diagram|Voronoi]]-cells. The cluster center is indicated by the lighter, bigger symbol.]]
  −
  −
Comparison of k-means and EM on artificial data visualized with ELKI. Using the variances, the EM algorithm can describe the normal distributions exactly, while k-means splits the data in Voronoi-cells. The cluster center is indicated by the lighter, bigger symbol.
  −
  −
在人工数据上对 k-均值与 EM 的比较(用 ELKI 可视化)。利用方差,EM 算法可以精确地描述正态分布,而 k-均值则把数据分割成 Voronoi 单元。聚类中心用颜色较浅、较大的符号表示。
  −
  −
  −
  −
[[File:Em old faithful.gif|thumb|240px|An animation demonstrating the EM algorithm fitting a two component Gaussian [[mixture model]] to the [[Old Faithful]] dataset. The algorithm steps through from a random initialization to convergence. ]]
  −
  −
An animation demonstrating the EM algorithm fitting a two component Gaussian mixture model to the Old Faithful dataset. The algorithm steps through from a random initialization to convergence.
  −
  −
动画演示了 EM 算法对“老忠实”(Old Faithful)间歇泉数据集拟合双分量高斯混合模型的过程:算法从随机初始化逐步迭代直至收敛。
  −
  −
  −
  −
Let <math>\mathbf{x} = (\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_n)</math> be a sample of <math>n</math> independent observations from a [[mixture model|mixture]] of two [[multivariate normal distribution]]s of dimension <math>d</math>, and let <math>\mathbf{z} = (z_1,z_2,\ldots,z_n)</math> be the latent variables that determine the component from which the observation originates.<ref name="hastie2001"/>
  −
  −
Let <math>\mathbf{x} = (\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_n)</math> be a sample of <math>n</math> independent observations from a mixture of two multivariate normal distributions of dimension <math>d</math>, and let <math>\mathbf{z} = (z_1,z_2,\ldots,z_n)</math> be the latent variables that determine the component from which the observation originates.
  −
  −
设 <math>\mathbf{x} = (\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_n)</math> 是来自两个 <math>d</math> 维多元正态分布混合的 <math>n</math> 个独立观测组成的样本,并设 <math>\mathbf{z} = (z_1,z_2,\ldots,z_n)</math> 为决定每个观测来自哪个分量的潜变量。
  −
  −
:<math>X_i \mid(Z_i = 1) \sim \mathcal{N}_d(\boldsymbol{\mu}_1,\Sigma_1)</math> and <math>X_i \mid(Z_i = 2) \sim \mathcal{N}_d(\boldsymbol{\mu}_2,\Sigma_2)</math>
  −
  −
<math>X_i \mid(Z_i = 1) \sim \mathcal{N}_d(\boldsymbol{\mu}_1,\Sigma_1)</math> and <math>X_i \mid(Z_i = 2) \sim \mathcal{N}_d(\boldsymbol{\mu}_2,\Sigma_2)</math>
  −
  −
<math>X_i \mid(Z_i = 1) \sim \mathcal{N}_d(\boldsymbol{\mu}_1,\Sigma_1)</math> 且 <math>X_i \mid(Z_i = 2) \sim \mathcal{N}_d(\boldsymbol{\mu}_2,\Sigma_2)</math>
  −
  −
where
  −
  −
where
  −
  −
其中
  −
  −
:<math>\operatorname{P} (Z_i = 1 ) = \tau_1 \, </math> and <math>\operatorname{P} (Z_i=2) = \tau_2 = 1-\tau_1</math>
  −
  −
<math>\operatorname{P} (Z_i = 1 ) = \tau_1 \, </math> and <math>\operatorname{P} (Z_i=2) = \tau_2 = 1-\tau_1</math>
  −
  −
<math>\operatorname{P} (Z_i = 1 ) = \tau_1 \, </math> 且 <math>\operatorname{P} (Z_i=2) = \tau_2 = 1-\tau_1</math>
  −
  −
  −
  −
The aim is to estimate the unknown parameters representing the ''mixing'' value between the Gaussians and the means and covariances of each:
  −
  −
The aim is to estimate the unknown parameters representing the mixing value between the Gaussians and the means and covariances of each:
  −
  −
目标是估计未知参数 θ,它包括两个高斯分量之间的混合比例,以及各分量的均值和协方差:
  −
  −
:<math>\theta = \big( \boldsymbol{\tau},\boldsymbol{\mu}_1,\boldsymbol{\mu}_2,\Sigma_1,\Sigma_2 \big)</math>
  −
  −
<math>\theta = \big( \boldsymbol{\tau},\boldsymbol{\mu}_1,\boldsymbol{\mu}_2,\Sigma_1,\Sigma_2 \big)</math>
  −
  −
<math>\theta = \big( \boldsymbol{\tau},\boldsymbol{\mu}_1,\boldsymbol{\mu}_2,\Sigma_1,\Sigma_2 \big)</math>
  −
  −
where the incomplete-data likelihood function is
  −
  −
where the incomplete-data likelihood function is
  −
  −
其中不完全数据的似然函数为
  −
  −
:<math>L(\theta;\mathbf{x}) =  \prod_{i=1}^n \sum_{j=1}^2 \tau_j \ f(\mathbf{x}_i;\boldsymbol{\mu}_j,\Sigma_j)</math>,
  −
  −
<math>L(\theta;\mathbf{x}) =  \prod_{i=1}^n \sum_{j=1}^2 \tau_j \ f(\mathbf{x}_i;\boldsymbol{\mu}_j,\Sigma_j)</math>,
  −
  −
<math>L(\theta;\mathbf{x}) =  \prod_{i=1}^n \sum_{j=1}^2 \tau_j \ f(\mathbf{x}_i;\boldsymbol{\mu}_j,\Sigma_j)</math>,
  −
  −
  −
  −
and the complete-data likelihood function is
  −
  −
and the complete-data likelihood function is
  −
  −
完全数据似然函数是
  −
  −
:<math>L(\theta;\mathbf{x},\mathbf{z}) = p(\mathbf{x},\mathbf{z} \mid \theta) = \prod_{i=1}^n  \prod_{j=1}^2  \ [f(\mathbf{x}_i;\boldsymbol{\mu}_j,\Sigma_j) \tau_j] ^{\mathbb{I}(z_i=j)} </math>
  −
  −
<math>L(\theta;\mathbf{x},\mathbf{z}) = p(\mathbf{x},\mathbf{z} \mid \theta) = \prod_{i=1}^n  \prod_{j=1}^2  \ [f(\mathbf{x}_i;\boldsymbol{\mu}_j,\Sigma_j) \tau_j] ^{\mathbb{I}(z_i=j)} </math>
  −
  −
<math>L(\theta;\mathbf{x},\mathbf{z}) = p(\mathbf{x},\mathbf{z} \mid \theta) = \prod_{i=1}^n  \prod_{j=1}^2  \ [f(\mathbf{x}_i;\boldsymbol{\mu}_j,\Sigma_j) \tau_j] ^{\mathbb{I}(z_i=j)} </math>
  −
  −
  −
  −
or
  −
  −
or
  −
  −
  −
  −
  −
  −
:<math>L(\theta;\mathbf{x},\mathbf{z}) = \exp \left\{ \sum_{i=1}^n \sum_{j=1}^2 \mathbb{I}(z_i=j) \big[ \log \tau_j -\tfrac{1}{2} \log |\Sigma_j| -\tfrac{1}{2}(\mathbf{x}_i-\boldsymbol{\mu}_j)^\top\Sigma_j^{-1} (\mathbf{x}_i-\boldsymbol{\mu}_j) -\tfrac{d}{2} \log(2\pi) \big] \right\}. </math>
  −
  −
<math>L(\theta;\mathbf{x},\mathbf{z}) = \exp \left\{ \sum_{i=1}^n \sum_{j=1}^2 \mathbb{I}(z_i=j) \big[ \log \tau_j -\tfrac{1}{2} \log |\Sigma_j| -\tfrac{1}{2}(\mathbf{x}_i-\boldsymbol{\mu}_j)^\top\Sigma_j^{-1} (\mathbf{x}_i-\boldsymbol{\mu}_j) -\tfrac{d}{2} \log(2\pi) \big] \right\}. </math>
  −
  −
<math>L(\theta;\mathbf{x},\mathbf{z}) = \exp \left\{ \sum_{i=1}^n \sum_{j=1}^2 \mathbb{I}(z_i=j) \big[ \log \tau_j -\tfrac{1}{2} \log |\Sigma_j| -\tfrac{1}{2}(\mathbf{x}_i-\boldsymbol{\mu}_j)^\top\Sigma_j^{-1} (\mathbf{x}_i-\boldsymbol{\mu}_j) -\tfrac{d}{2} \log(2\pi) \big] \right\}. </math>
  −
  −
  −
  −
where <math>\mathbb{I}</math> is an [[indicator function]] and <math>f</math> is the [[probability density function]] of a multivariate normal.
  −
  −
where <math>\mathbb{I}</math> is an indicator function and <math>f</math> is the probability density function of a multivariate normal.
  −
  −
其中 <math>\mathbb{I}</math> 是指示函数,<math>f</math> 是多元正态分布的概率密度函数。
  −
  −
  −
  −
In the last equality, for each {{math|''i''}}, one indicator <math>\mathbb{I}(z_i=j)</math> is equal to zero, and one indicator is equal to one. The inner sum thus reduces to one term.
  −
  −
In the last equality, for each , one indicator <math>\mathbb{I}(z_i=j)</math> is equal to zero, and one indicator is equal to one. The inner sum thus reduces to one term.
  −
  −
在最后一个等式中,对每个 i,两个指示符 <math>\mathbb{I}(z_i=j)</math> 中恰有一个等于 0、另一个等于 1,因此内层求和化简为一项。
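As a numerical aside, the incomplete-data log-likelihood defined above can be evaluated directly with SciPy's multivariate normal density. The sketch below is only an illustration; the variable names <code>x</code>, <code>tau</code>, <code>mus</code> and <code>sigmas</code> are assumptions made for this example, not part of the article.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import multivariate_normal

def incomplete_data_loglik(x, tau, mus, sigmas):
    """log L(theta; x) = sum_i log( sum_j tau_j f(x_i; mu_j, Sigma_j) ), two components."""
    dens = np.column_stack([
        multivariate_normal.pdf(x, mean=mus[j], cov=sigmas[j]) for j in range(2)
    ])                                    # (n, 2) matrix of component densities f(x_i; mu_j, Sigma_j)
    return np.sum(np.log(dens @ tau))     # tau-weighted mixture density, log-summed over i

# Example call with arbitrary made-up parameters:
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 2))
print(incomplete_data_loglik(x, np.array([0.6, 0.4]),
                             [np.zeros(2), np.ones(2)],
                             [np.eye(2), 2.0 * np.eye(2)]))
</syntaxhighlight>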
  −
  −
  −
  −
==== E step ====
  −
  −
  −
  −
Given our current estimate of the parameters ''θ''<sup>(''t'')</sup>, the conditional distribution of the ''Z''<sub>''i''</sub> is determined by [[Bayes theorem]] to be the proportional height of the normal [[probability density function|density]] weighted by ''τ'':
  −
  −
Given our current estimate of the parameters θ<sup>(t)</sup>, the conditional distribution of the Z<sub>i</sub> is determined by Bayes theorem to be the proportional height of the normal density weighted by τ:
  −
  −
给定当前的参数估计 θ<sup>(t)</sup>,由贝叶斯定理可知,Z<sub>i</sub> 的条件分布等于以 τ 加权的正态密度所占的相对比例:
  −
  −
:<math>T_{j,i}^{(t)} := \operatorname{P}(Z_i=j \mid X_i=\mathbf{x}_i ;\theta^{(t)}) = \frac{\tau_j^{(t)} \ f(\mathbf{x}_i;\boldsymbol{\mu}_j^{(t)},\Sigma_j^{(t)})}{\tau_1^{(t)} \ f(\mathbf{x}_i;\boldsymbol{\mu}_1^{(t)},\Sigma_1^{(t)}) + \tau_2^{(t)} \ f(\mathbf{x}_i;\boldsymbol{\mu}_2^{(t)},\Sigma_2^{(t)})} </math>.
  −
  −
<math>T_{j,i}^{(t)} := \operatorname{P}(Z_i=j \mid X_i=\mathbf{x}_i ;\theta^{(t)}) = \frac{\tau_j^{(t)} \ f(\mathbf{x}_i;\boldsymbol{\mu}_j^{(t)},\Sigma_j^{(t)})}{\tau_1^{(t)} \ f(\mathbf{x}_i;\boldsymbol{\mu}_1^{(t)},\Sigma_1^{(t)}) + \tau_2^{(t)} \ f(\mathbf{x}_i;\boldsymbol{\mu}_2^{(t)},\Sigma_2^{(t)})} </math>.
  −
  −
<math>T_{j,i}^{(t)} := \operatorname{P}(Z_i=j \mid X_i=\mathbf{x}_i ;\theta^{(t)}) = \frac{\tau_j^{(t)} \ f(\mathbf{x}_i;\boldsymbol{\mu}_j^{(t)},\Sigma_j^{(t)})}{\tau_1^{(t)} \ f(\mathbf{x}_i;\boldsymbol{\mu}_1^{(t)},\Sigma_1^{(t)}) + \tau_2^{(t)} \ f(\mathbf{x}_i;\boldsymbol{\mu}_2^{(t)},\Sigma_2^{(t)})} </math>。
  −
  −
  −
  −
These are called the "membership probabilities" which are normally considered the output of the E step (although this is not the Q function of below).
  −
  −
These are called the "membership probabilities" which are normally considered the output of the E step (although this is not the Q function of below).
  −
  −
这些被称为“成员概率” ,通常被认为是 e 步的输出(尽管这不是下面的 q 函数)。
  −
  −
  −
  −
This E step corresponds with setting up this function for Q:
  −
  −
This E step corresponds with setting up this function for Q:
  −
  −
这一 E 步相当于构造如下的 Q 函数:
  −
  −
:<math>\begin{align}Q(\theta\mid\theta^{(t)})
  −
  −
<math>\begin{align}Q(\theta\mid\theta^{(t)})
  −
  −
<math>\begin{align}Q(\theta\mid\theta^{(t)})
  −
  −
&= \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\mathbf{\theta}^{(t)}} [\log L(\theta;\mathbf{x},\mathbf{Z}) ] \\
  −
  −
&= \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\mathbf{\theta}^{(t)}} [\log L(\theta;\mathbf{x},\mathbf{Z}) ] \\
  −
  −
&= \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\mathbf{\theta}^{(t)}} [\log L(\theta;\mathbf{x},\mathbf{Z}) ] \\
  −
  −
&= \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\mathbf{\theta}^{(t)}} [\log \prod_{i=1}^{n}L(\theta;\mathbf{x}_i,Z_i) ] \\
  −
  −
&= \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\mathbf{\theta}^{(t)}} [\log \prod_{i=1}^{n}L(\theta;\mathbf{x}_i,Z_i) ] \\
  −
  −
&= \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\mathbf{\theta}^{(t)}} [\log \prod_{i=1}^{n}L(\theta;\mathbf{x}_i,Z_i) ] \\
  −
  −
&= \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\mathbf{\theta}^{(t)}} [\sum_{i=1}^n \log L(\theta;\mathbf{x}_i,Z_i) ] \\
  −
  −
&= \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\mathbf{\theta}^{(t)}} [\sum_{i=1}^n \log L(\theta;\mathbf{x}_i,Z_i) ] \\
  −
  −
&= \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\mathbf{\theta}^{(t)}} [\sum_{i=1}^n \log L(\theta;\mathbf{x}_i,Z_i) ] \\
  −
  −
&= \sum_{i=1}^n\operatorname{E}_{Z_i\mid\mathbf{X};\mathbf{\theta}^{(t)}} [\log L(\theta;\mathbf{x}_i,Z_i) ] \\
  −
  −
&= \sum_{i=1}^n\operatorname{E}_{Z_i\mid\mathbf{X};\mathbf{\theta}^{(t)}} [\log L(\theta;\mathbf{x}_i,Z_i) ] \\
  −
  −
&= \sum_{i=1}^n\operatorname{E}_{Z_i\mid\mathbf{X};\mathbf{\theta}^{(t)}} [\log L(\theta;\mathbf{x}_i,Z_i) ] \\
  −
  −
&= \sum_{i=1}^n \sum_{j=1}^2 P(Z_i =j \mid X_i = \mathbf{x}_i; \theta^{(t)}) \log L(\theta_j;\mathbf{x}_i,j) \\
  −
  −
&= \sum_{i=1}^n \sum_{j=1}^2 P(Z_i =j \mid X_i = \mathbf{x}_i; \theta^{(t)}) \log L(\theta_j;\mathbf{x}_i,j) \\
  −
  −
&= \sum_{i=1}^n \sum_{j=1}^2 P(Z_i =j \mid X_i = \mathbf{x}_i; \theta^{(t)}) \log L(\theta_j;\mathbf{x}_i,j) \\
  −
  −
&= \sum_{i=1}^n \sum_{j=1}^2 T_{j,i}^{(t)} \big[ \log \tau_j  -\tfrac{1}{2} \log |\Sigma_j| -\tfrac{1}{2}(\mathbf{x}_i-\boldsymbol{\mu}_j)^\top\Sigma_j^{-1} (\mathbf{x}_i-\boldsymbol{\mu}_j) -\tfrac{d}{2} \log(2\pi) \big]
  −
  −
&= \sum_{i=1}^n \sum_{j=1}^2 T_{j,i}^{(t)} \big[ \log \tau_j  -\tfrac{1}{2} \log |\Sigma_j| -\tfrac{1}{2}(\mathbf{x}_i-\boldsymbol{\mu}_j)^\top\Sigma_j^{-1} (\mathbf{x}_i-\boldsymbol{\mu}_j) -\tfrac{d}{2} \log(2\pi) \big]
  −
  −
&= \sum_{i=1}^n \sum_{j=1}^2 T_{j,i}^{(t)} \big[ \log \tau_j  -\tfrac{1}{2} \log |\Sigma_j| -\tfrac{1}{2}(\mathbf{x}_i-\boldsymbol{\mu}_j)^\top\Sigma_j^{-1} (\mathbf{x}_i-\boldsymbol{\mu}_j) -\tfrac{d}{2} \log(2\pi) \big]
  −
  −
\end{align}</math>
  −
  −
\end{align}</math>
  −
  −
\end{align}</math>
  −
  −
The expectation of <math>\log L(\theta;\mathbf{x}_i,Z_i)</math> inside the sum is taken with respect to the probability density function <math>P(Z_i \mid X_i = \mathbf{x}_i; \theta^{(t)})</math>, which might be different for each  <math>\mathbf{x}_i</math> of the training set. Everything in the E step is known before the step is taken except <math>T_{j,i}</math>, which is computed according to the equation at the beginning of the E step section.
  −
  −
The expectation of <math>\log L(\theta;\mathbf{x}_i,Z_i)</math> inside the sum is taken with respect to the probability density function <math>P(Z_i \mid X_i = \mathbf{x}_i; \theta^{(t)})</math>, which might be different for each  <math>\mathbf{x}_i</math> of the training set. Everything in the E step is known before the step is taken except <math>T_{j,i}</math>, which is computed according to the equation at the beginning of the E step section.
  −
  −
求和号内 <math>\log L(\theta;\mathbf{x}_i,Z_i)</math> 的期望是关于概率密度函数 <math>P(Z_i \mid X_i = \mathbf{x}_i; \theta^{(t)})</math> 取的,它对训练集中的每个 <math>\mathbf{x}_i</math> 可能各不相同。在执行 E 步之前,其中的所有量都已知,唯一的例外是 <math>T_{j,i}</math>,它按照 E 步一节开头的公式计算。
  −
  −
  −
  −
This full conditional expectation does not need to be calculated in one step, because ''τ'' and '''μ'''/'''Σ''' appear in separate linear terms and can thus be maximized independently.
  −
  −
This full conditional expectation does not need to be calculated in one step, because τ and μ/Σ appear in separate linear terms and can thus be maximized independently.
  −
  −
这个完整的条件期望不必一步算出,因为 τ 和 μ/Σ 出现在彼此独立的线性项中,因此可以分别独立地最大化。
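In code, the E step above amounts to computing the matrix of membership probabilities <math>T_{j,i}^{(t)}</math>. The following NumPy/SciPy sketch does this for the two-component case; the names <code>e_step</code>, <code>tau</code>, <code>mus</code> and <code>sigmas</code> are assumptions made for this illustration.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import multivariate_normal

def e_step(x, tau, mus, sigmas):
    """Return T with T[i, j] = P(Z_i = j | x_i; theta^(t)) for the two-component mixture."""
    # Unnormalised posterior weights tau_j * f(x_i; mu_j, Sigma_j) ...
    weighted = np.column_stack([
        tau[j] * multivariate_normal.pdf(x, mean=mus[j], cov=sigmas[j]) for j in range(2)
    ])                                                  # shape (n, 2)
    # ... normalised over the two components, as in the Bayes-theorem expression above.
    return weighted / weighted.sum(axis=1, keepdims=True)
</syntaxhighlight>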
  −
  −
  −
  −
==== M step ====
  −
  −
''Q''(''θ''&nbsp;|&nbsp;''θ''<sup>(''t'')</sup>) being quadratic in form means that determining the maximizing values of ''θ'' is relatively straightforward. Also, ''τ'', ('''μ'''<sub>1</sub>,''Σ''<sub>1</sub>) and ('''μ'''<sub>2</sub>,''Σ''<sub>2</sub>) may all be maximized independently since they all appear in separate linear terms.
  −
  −
Q(θ&nbsp;|&nbsp;θ<sup>(t)</sup>) being quadratic in form means that determining the maximizing values of θ is relatively straightforward. Also, τ, (μ<sub>1</sub>,Σ<sub>1</sub>) and (μ<sub>2</sub>,Σ<sub>2</sub>) may all be maximized independently since they all appear in separate linear terms.
  −
  −
Q(θ | θ<sup>(t)</sup>) 是二次型,这意味着求使其最大化的 θ 相对简单。此外,τ、(μ<sub>1</sub>,Σ<sub>1</sub>) 和 (μ<sub>2</sub>,Σ<sub>2</sub>) 都可以各自独立地最大化,因为它们出现在彼此独立的线性项中。
  −
  −
  −
  −
To begin, consider ''τ'', which has the constraint ''τ''<sub>1</sub> + ''τ''<sub>2</sub>=1:
  −
  −
To begin, consider τ, which has the constraint τ<sub>1</sub> + τ<sub>2</sub>=1:
  −
  −
首先考虑 τ,它满足约束 τ<sub>1</sub> + τ<sub>2</sub> = 1:
  −
  −
:<math>\begin{align}\boldsymbol{\tau}^{(t+1)}
  −
  −
<math>\begin{align}\boldsymbol{\tau}^{(t+1)}
  −
  −
<math>\begin{align}\boldsymbol{\tau}^{(t+1)}
  −
  −
&= \underset{\boldsymbol{\tau}} {\operatorname{arg\,max}}\  Q(\theta \mid \theta^{(t)} ) \\
  −
  −
&= \underset{\boldsymbol{\tau}} {\operatorname{arg\,max}}\  Q(\theta \mid \theta^{(t)} ) \\
  −
  −
&= \underset{\boldsymbol{\tau}} {\operatorname{arg\,max}}\  Q(\theta \mid \theta^{(t)} ) \\
  −
  −
&= \underset{\boldsymbol{\tau}} {\operatorname{arg\,max}} \ \left\{ \left[  \sum_{i=1}^n T_{1,i}^{(t)} \right] \log \tau_1 + \left[  \sum_{i=1}^n T_{2,i}^{(t)} \right] \log \tau_2  \right\}
  −
  −
&= \underset{\boldsymbol{\tau}} {\operatorname{arg\,max}} \ \left\{ \left[  \sum_{i=1}^n T_{1,i}^{(t)} \right] \log \tau_1 + \left[  \sum_{i=1}^n T_{2,i}^{(t)} \right] \log \tau_2  \right\}
  −
  −
&= \underset{\boldsymbol{\tau}} {\operatorname{arg\,max}} \ \left\{ \left[  \sum_{i=1}^n T_{1,i}^{(t)} \right] \log \tau_1 + \left[  \sum_{i=1}^n T_{2,i}^{(t)} \right] \log \tau_2  \right\}
  −
  −
\end{align}</math>
  −
  −
\end{align}</math>
  −
  −
\end{align}</math>
  −
  −
This has the same form as the MLE for the [[binomial distribution]], so
  −
  −
This has the same form as the MLE for the binomial distribution, so
  −
  −
这与二项分布的 MLE 形式相同,所以
  −
  −
:<math>\tau^{(t+1)}_j = \frac{\sum_{i=1}^n T_{j,i}^{(t)}}{\sum_{i=1}^n (T_{1,i}^{(t)} + T_{2,i}^{(t)} ) } = \frac{1}{n} \sum_{i=1}^n T_{j,i}^{(t)}</math>.
  −
  −
<math>\tau^{(t+1)}_j = \frac{\sum_{i=1}^n T_{j,i}^{(t)}}{\sum_{i=1}^n (T_{1,i}^{(t)} + T_{2,i}^{(t)} ) } = \frac{1}{n} \sum_{i=1}^n T_{j,i}^{(t)}</math>.
  −
  −
<math>\tau^{(t+1)}_j = \frac{\sum_{i=1}^n T_{j,i}^{(t)}}{\sum_{i=1}^n (T_{1,i}^{(t)} + T_{2,i}^{(t)} ) } = \frac{1}{n} \sum_{i=1}^n T_{j,i}^{(t)}</math>。
  −
  −
  −
  −
For the next estimates of ('''μ'''<sub>1</sub>,Σ<sub>1</sub>):
  −
  −
For the next estimates of (μ<sub>1</sub>,Σ<sub>1</sub>):
  −
  −
对于 (μ<sub>1</sub>,Σ<sub>1</sub>) 的下一步估计:
  −
  −
:<math>\begin{align}(\boldsymbol{\mu}_1^{(t+1)},\Sigma_1^{(t+1)})
  −
  −
<math>\begin{align}(\boldsymbol{\mu}_1^{(t+1)},\Sigma_1^{(t+1)})
  −
  −
<math>\begin{align}(\boldsymbol{\mu}_1^{(t+1)},\Sigma_1^{(t+1)})
  −
  −
&= \underset{\boldsymbol{\mu}_1,\Sigma_1} \operatorname{arg\,max} Q(\theta \mid \theta^{(t)} ) \\
  −
  −
&= \underset{\boldsymbol{\mu}_1,\Sigma_1} \operatorname{arg\,max} Q(\theta \mid \theta^{(t)} ) \\
  −
  −
&= \underset{\boldsymbol{\mu}_1,\Sigma_1} \operatorname{arg\,max} Q(\theta \mid \theta^{(t)} ) \\
  −
  −
&= \underset{\boldsymbol{\mu}_1,\Sigma_1} \operatorname{arg\,max} \sum_{i=1}^n T_{1,i}^{(t)} \left\{ -\tfrac{1}{2} \log |\Sigma_1| -\tfrac{1}{2}(\mathbf{x}_i-\boldsymbol{\mu}_1)^\top\Sigma_1^{-1} (\mathbf{x}_i-\boldsymbol{\mu}_1) \right\}
  −
  −
&= \underset{\boldsymbol{\mu}_1,\Sigma_1} \operatorname{arg\,max} \sum_{i=1}^n T_{1,i}^{(t)} \left\{ -\tfrac{1}{2} \log |\Sigma_1| -\tfrac{1}{2}(\mathbf{x}_i-\boldsymbol{\mu}_1)^\top\Sigma_1^{-1} (\mathbf{x}_i-\boldsymbol{\mu}_1) \right\}
  −
  −
&= \underset{\boldsymbol{\mu}_1,\Sigma_1} \operatorname{arg\,max} \sum_{i=1}^n T_{1,i}^{(t)} \left\{ -\tfrac{1}{2} \log |\Sigma_1| -\tfrac{1}{2}(\mathbf{x}_i-\boldsymbol{\mu}_1)^\top\Sigma_1^{-1} (\mathbf{x}_i-\boldsymbol{\mu}_1) \right\}
  −
  −
\end{align}</math>.
  −
  −
\end{align}</math>.
  −
  −
\end{align}</math>。
  −
  −
This has the same form as a weighted MLE for a normal distribution, so
  −
  −
This has the same form as a weighted MLE for a normal distribution, so
  −
  −
这与正态分布的加权极大似然估计形式相同,因此
  −
  −
:<math>\boldsymbol{\mu}_1^{(t+1)} = \frac{\sum_{i=1}^n T_{1,i}^{(t)} \mathbf{x}_i}{\sum_{i=1}^n T_{1,i}^{(t)}} </math> and <math>\Sigma_1^{(t+1)} = \frac{\sum_{i=1}^n T_{1,i}^{(t)} (\mathbf{x}_i - \boldsymbol{\mu}_1^{(t+1)}) (\mathbf{x}_i - \boldsymbol{\mu}_1^{(t+1)})^\top }{\sum_{i=1}^n T_{1,i}^{(t)}} </math>
  −
  −
<math>\boldsymbol{\mu}_1^{(t+1)} = \frac{\sum_{i=1}^n T_{1,i}^{(t)} \mathbf{x}_i}{\sum_{i=1}^n T_{1,i}^{(t)}} </math> and <math>\Sigma_1^{(t+1)} = \frac{\sum_{i=1}^n T_{1,i}^{(t)} (\mathbf{x}_i - \boldsymbol{\mu}_1^{(t+1)}) (\mathbf{x}_i - \boldsymbol{\mu}_1^{(t+1)})^\top }{\sum_{i=1}^n T_{1,i}^{(t)}} </math>
  −
  −
<math>\boldsymbol{\mu}_1^{(t+1)} = \frac{\sum_{i=1}^n T_{1,i}^{(t)} \mathbf{x}_i}{\sum_{i=1}^n T_{1,i}^{(t)}} </math> 和 <math>\Sigma_1^{(t+1)} = \frac{\sum_{i=1}^n T_{1,i}^{(t)} (\mathbf{x}_i - \boldsymbol{\mu}_1^{(t+1)}) (\mathbf{x}_i - \boldsymbol{\mu}_1^{(t+1)})^\top }{\sum_{i=1}^n T_{1,i}^{(t)}} </math>
  −
  −
and, by symmetry
  −
  −
and, by symmetry
  −
  −
并且,通过对称
  −
  −
:<math>\boldsymbol{\mu}_2^{(t+1)} = \frac{\sum_{i=1}^n T_{2,i}^{(t)} \mathbf{x}_i}{\sum_{i=1}^n T_{2,i}^{(t)}} </math> and <math>\Sigma_2^{(t+1)} = \frac{\sum_{i=1}^n T_{2,i}^{(t)} (\mathbf{x}_i - \boldsymbol{\mu}_2^{(t+1)}) (\mathbf{x}_i - \boldsymbol{\mu}_2^{(t+1)})^\top }{\sum_{i=1}^n T_{2,i}^{(t)}} </math>.
  −
  −
<math>\boldsymbol{\mu}_2^{(t+1)} = \frac{\sum_{i=1}^n T_{2,i}^{(t)} \mathbf{x}_i}{\sum_{i=1}^n T_{2,i}^{(t)}} </math> and <math>\Sigma_2^{(t+1)} = \frac{\sum_{i=1}^n T_{2,i}^{(t)} (\mathbf{x}_i - \boldsymbol{\mu}_2^{(t+1)}) (\mathbf{x}_i - \boldsymbol{\mu}_2^{(t+1)})^\top }{\sum_{i=1}^n T_{2,i}^{(t)}} </math>.
  −
  −
<math>\boldsymbol{\mu}_2^{(t+1)} = \frac{\sum_{i=1}^n T_{2,i}^{(t)} \mathbf{x}_i}{\sum_{i=1}^n T_{2,i}^{(t)}} </math> 和 <math>\Sigma_2^{(t+1)} = \frac{\sum_{i=1}^n T_{2,i}^{(t)} (\mathbf{x}_i - \boldsymbol{\mu}_2^{(t+1)}) (\mathbf{x}_i - \boldsymbol{\mu}_2^{(t+1)})^\top }{\sum_{i=1}^n T_{2,i}^{(t)}} </math>。
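The closed-form M-step updates above translate directly into weighted sample statistics. A minimal sketch, continuing the assumed names from the E-step sketch earlier (nothing here is a reference implementation):

<syntaxhighlight lang="python">
import numpy as np

def m_step(x, T):
    """Closed-form M step: mixing weights, means and covariances from responsibilities T."""
    n, d = x.shape
    tau = T.sum(axis=0) / n                                    # tau_j^(t+1): weighted "binomial MLE"
    mus, sigmas = [], []
    for j in range(T.shape[1]):
        w = T[:, j]                                            # responsibilities T_{j,i}
        mu = (w[:, None] * x).sum(axis=0) / w.sum()            # weighted mean
        centred = x - mu
        sigma = (w[:, None] * centred).T @ centred / w.sum()   # weighted covariance
        mus.append(mu)
        sigmas.append(sigma)
    return tau, mus, sigmas
</syntaxhighlight>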
  −
  −
  −
  −
==== Termination ====
  −
  −
  −
  −
Conclude the iterative process if <math> E_{Z\mid\theta^{(t)},\mathbf{x}}[\log L(\theta^{(t)};\mathbf{x},\mathbf{Z})] \leq E_{Z\mid\theta^{(t-1)},\mathbf{x}}[\log L(\theta^{(t-1)};\mathbf{x},\mathbf{Z})] + \varepsilon</math> for <math> \varepsilon </math> below some preset threshold.
  −
  −
Conclude the iterative process if <math> E_{Z\mid\theta^{(t)},\mathbf{x}}[\log L(\theta^{(t)};\mathbf{x},\mathbf{Z})] \leq E_{Z\mid\theta^{(t-1)},\mathbf{x}}[\log L(\theta^{(t-1)};\mathbf{x},\mathbf{Z})] + \varepsilon</math> for <math> \varepsilon </math> below some preset threshold.
  −
  −
当 <math> E_{Z\mid\theta^{(t)},\mathbf{x}}[\log L(\theta^{(t)};\mathbf{x},\mathbf{Z})] \leq E_{Z\mid\theta^{(t-1)},\mathbf{x}}[\log L(\theta^{(t-1)};\mathbf{x},\mathbf{Z})] + \varepsilon</math>(其中 <math> \varepsilon </math> 为某个预设的阈值)时,终止迭代过程。
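Putting the pieces together, a practical stopping rule monitors how much the log-likelihood improves per iteration and stops once the gain drops below ε. The sketch below reuses the <code>e_step</code>, <code>m_step</code> and <code>incomplete_data_loglik</code> sketches above and tracks the observed-data log-likelihood, a common practical substitute for the expected complete-data log-likelihood used in the criterion above.

<syntaxhighlight lang="python">
import numpy as np

def fit_two_component_gmm(x, tau, mus, sigmas, eps=1e-6, max_iter=500):
    """Alternate E and M steps until the log-likelihood gain falls below eps."""
    prev = -np.inf
    for _ in range(max_iter):
        T = e_step(x, tau, mus, sigmas)                   # E step (sketch above)
        tau, mus, sigmas = m_step(x, T)                   # M step (sketch above)
        ll = incomplete_data_loglik(x, tau, mus, sigmas)  # monitor log L(theta; x)
        if ll - prev <= eps:                              # terminate: improvement below threshold
            break
        prev = ll
    return tau, mus, sigmas
</syntaxhighlight>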
  −
  −
  −
  −
==== Generalization ====
  −
  −
  −
  −
The algorithm illustrated above can be generalized for mixtures of more than two [[multivariate normal distribution]]s.
  −
  −
The algorithm illustrated above can be generalized for mixtures of more than two multivariate normal distributions.
  −
  −
上述算法可以推广到多于两个多元正态分布的混合情况。
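In practice this generalized, K-component EM fit is available off the shelf; for example, scikit-learn's <code>GaussianMixture</code> estimator runs exactly this kind of EM iteration. A brief illustrative call (the data array <code>x</code> here is only a placeholder):

<syntaxhighlight lang="python">
import numpy as np
from sklearn.mixture import GaussianMixture

x = np.random.default_rng(0).normal(size=(500, 2))   # placeholder (n, d) data
gm = GaussianMixture(n_components=3, covariance_type="full", tol=1e-6).fit(x)
print(gm.weights_)   # fitted mixing proportions
print(gm.means_)     # fitted component means
</syntaxhighlight>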
  −
  −
  −
  −
===Truncated and censored regression===
  −
  −
  −
  −
The EM algorithm has been implemented in the case where an underlying [[linear regression]] model exists explaining the variation of some quantity, but where the values actually observed are censored or truncated versions of those represented in the model.<ref name=Wolynetz>{{cite journal |title= Maximum likelihood estimation in a linear model from confined and censored normal data |last1= Wolynetz |first1= M.S. |journal= [[Journal of the Royal Statistical Society, Series C]] |year= 1979 |volume= 28 |issue= 2 |pages= 195–206 |doi= 10.2307/2346749|jstor= 2346749 }}</ref>  Special cases of this model include censored or truncated observations from one [[normal distribution]].<ref name=Wolynetz/>
  −
  −
The EM algorithm has been implemented in the case where an underlying linear regression model exists explaining the variation of some quantity, but where the values actually observed are censored or truncated versions of those represented in the model.  Special cases of this model include censored or truncated observations from one normal distribution.
  −
  −
EM 算法也已应用于这样的情形:存在一个潜在的线性回归模型来解释某个量的变化,但实际观测到的值是模型中相应值的删失或截断版本。该模型的特例包括来自同一个正态分布的删失或截断观测。
  −
  −
  −
  −
== Alternatives ==
  −
  −
EM typically converges to a local optimum, not necessarily the global optimum, with no bound on the convergence rate in general. It is possible that it can be arbitrarily poor in high dimensions and there can be an exponential number of local optima. Hence, a need exists for alternative methods for guaranteed learning, especially in the high-dimensional setting. Alternatives to EM exist with better guarantees for consistency, which are termed ''moment-based approaches''<ref>{{Cite journal|last=Pearson|first=Karl|date=1894|title=Contributions to the Mathematical Theory of Evolution|journal=Philosophical Transactions of the Royal Society of London A|volume=185|pages=71–110|issn=0264-3820|jstor=90667|doi=10.1098/rsta.1894.0003|bibcode=1894RSPTA.185...71P|doi-access=free}}</ref> or the so-called ''spectral techniques''<ref>{{Cite journal|last=Shaban|first=Amirreza|last2=Mehrdad|first2=Farajtabar|last3=Bo|first3=Xie|last4=Le|first4=Song|last5=Byron|first5=Boots|date=2015|title=Learning Latent Variable Models by Improving Spectral Solutions with Exterior Point Method|url=https://www.cc.gatech.edu/~bboots3/files/SpectralExteriorPoint-NIPSWorkshop.pdf|journal=UAI|volume=|pages=792–801|via=}}</ref><ref>{{Cite book|title=Local Loss Optimization in Operator Models: A New Insight into Spectral Learning|last=Balle, Borja Quattoni, Ariadna Carreras, Xavier|date=2012-06-27|oclc=815865081}}</ref>{{Citation needed|date=April 2019}}. Moment-based approaches to learning the parameters of a probabilistic model are of increasing interest recently since they enjoy guarantees such as global convergence under certain conditions unlike EM which is often plagued by the issue of getting stuck in local optima. Algorithms with guarantees for learning can be derived for a number of important models such as mixture models, HMMs etc. For these spectral methods, no spurious local optima occur, and the true parameters can be consistently estimated under some regularity conditions{{Citation needed|date=April 2019}}.
  −
  −
EM typically converges to a local optimum, not necessarily the global optimum, with no bound on the convergence rate in general. It is possible that it can be arbitrarily poor in high dimensions and there can be an exponential number of local optima. Hence, a need exists for alternative methods for guaranteed learning, especially in the high-dimensional setting. Alternatives to EM exist with better guarantees for consistency, which are termed moment-based approaches or the so-called spectral techniques. Moment-based approaches to learning the parameters of a probabilistic model are of increasing interest recently since they enjoy guarantees such as global convergence under certain conditions unlike EM which is often plagued by the issue of getting stuck in local optima. Algorithms with guarantees for learning can be derived for a number of important models such as mixture models, HMMs etc. For these spectral methods, no spurious local optima occur, and the true parameters can be consistently estimated under some regularity conditions.
  −
  −
EM 通常收敛到局部最优,而不一定是全局最优,并且一般对收敛速度没有界。在高维情形下,它可能表现得任意差,局部最优解的数目也可能呈指数级增长。因此,人们需要有保证的替代学习方法,尤其是在高维情形下。存在一致性保证更好的 EM 替代方法,称为基于矩的方法或所谓的谱方法。基于矩的概率模型参数学习方法近来受到越来越多的关注,因为它们在一定条件下享有全局收敛等保证,而 EM 常常受困于陷入局部最优的问题。对许多重要模型(如混合模型、隐马尔可夫模型等)都可以推导出具有学习保证的算法。对于这些谱方法,不会出现虚假的局部最优,并且在一定的正则性条件下可以一致地估计出真实参数。
  −
  −
  −
  −
== See also ==
  −
  −
* [[mixture distribution]]
  −
  −
* [[compound distribution]]
  −
  −
* [[density estimation]]
  −
  −
* [[total absorption spectroscopy]]
  −
  −
* The EM algorithm can be viewed as a special case of the [[MM algorithm|majorize-minimization (MM) algorithm]].<ref>{{cite web|last=Lange|first=Kenneth|title=The MM Algorithm|url=http://www.stat.berkeley.edu/~aldous/Colloq/lange-talk.pdf}}</ref>
  −
  −
  −
  −
==References==
  −
  −
{{reflist}}
  −
  −
  −
  −
== Further reading ==
  −
  −
* {{cite book |first=Robert |last=Hogg |first2=Joseph |last2=McKean |authorlink3=Allen Craig |first3=Allen |last3=Craig |title=Introduction to Mathematical Statistics |pages=359–364 |location=Upper Saddle River, NJ |publisher=Pearson Prentice Hall |year=2005 }}
  −
  −
* {{cite journal |citeseerx= 10.1.1.9.9735 |title= The Expectation Maximization Algorithm|first=Frank |last= Dellaert |author-link= Frank Dellaert |year= 2002}} gives an easier explanation of EM algorithm as to lowerbound maximization.
  −
  −
* {{cite book
  −
  −
|last1 = Bishop |first1 = Christopher M.
  −
  −
|last1 = Bishop |first1 = Christopher M.
  −
  −
|last1 = Bishop |first1 = Christopher M.
  −
  −
|author-link = Christopher Bishop
  −
  −
|author-link = Christopher Bishop
  −
  −
作者链接 Christopher Bishop
  −
  −
|title = Pattern Recognition and Machine Learning
  −
  −
|title = Pattern Recognition and Machine Learning
  −
  −
模式识别与机器学习
  −
  −
|year = 2006
  −
  −
|year = 2006
  −
  −
2006年
  −
  −
|publisher = Springer
  −
  −
|publisher = Springer
  −
  −
出版商斯普林格
  −
  −
|ref = CITEREFBishop2006
  −
  −
|ref = CITEREFBishop2006
  −
  −
|ref = CITEREFBishop2006
  −
  −
|isbn = 978-0-387-31073-2
  −
  −
|isbn = 978-0-387-31073-2
  −
  −
[国际标准图书编号978-0-387-31073-2]
  −
  −
}}
  −
  −
}}
  −
  −
}}
  −
  −
* {{cite journal |doi=10.1561/2000000034 |title=Theory and Use of the EM Algorithm |journal=Foundations and Trends in Signal Processing |volume=4 |issue=3 |pages=223–296 |first1=M. R. |last1=Gupta |first2=Y. |last2=Chen |year=2010 |citeseerx=10.1.1.219.6830 }} A well-written short book on EM, including detailed derivation of EM for GMMs, HMMs, and Dirichlet.
  −
  −
* {{cite journal |citeseerx= 10.1.1.28.613 |title= A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models|first=Jeff |last= Bilmes |year= 1998}} includes a simplified derivation of the EM equations for Gaussian Mixtures and Gaussian Mixture Hidden Markov Models.
  −
  −
* {{cite book |first=Geoffrey J. |last=McLachlan |first2=Thriyambakam |last2=Krishnan |title=The EM Algorithm and Extensions |location=Hoboken |publisher=Wiley |edition=2nd |year=2008 |isbn=978-0-471-20170-0 }}
  −
  −
  −
  −
== External links ==
  −
  −
* Various 1D, 2D and 3D [http://wiki.stat.ucla.edu/socr/index.php/SOCR_EduMaterials_Activities_2D_PointSegmentation_EM_Mixture demonstrations of EM together with Mixture Modeling] are provided as part of the paired [[SOCR]] activities and applets. These applets and activities show empirically the properties of the EM algorithm for parameter estimation in diverse settings.
  −
  −
* [https://arxiv.org/abs/1203.5181 k-MLE: A fast algorithm for learning statistical mixture models]
  −
  −
* [https://github.com/l-/CommonDataAnalysis Class hierarchy in [[C++]] (GPL) including Gaussian Mixtures]
  −
  −
* [http://www.inference.phy.cam.ac.uk/mackay/itila/ The on-line textbook: Information Theory, Inference, and Learning Algorithms], by [[David J.C. MacKay]] includes simple examples of the EM algorithm such as clustering using the soft ''k''-means algorithm, and emphasizes the variational view of the EM algorithm, as described in Chapter 33.7 of version 7.2 (fourth edition).
  −
  −
* [http://www.cse.buffalo.edu/faculty/mbeal/papers/beal03.pdf Variational Algorithms for Approximate Bayesian Inference], by M. J. Beal includes comparisons of EM to Variational Bayesian EM and derivations of several models including Variational Bayesian HMMs  ([http://www.cse.buffalo.edu/faculty/mbeal/thesis/index.html chapters]).
  −
  −
* [http://www.seanborman.com/publications/EM_algorithm.pdf The Expectation Maximization Algorithm: A short tutorial], A self-contained derivation of the EM Algorithm by Sean Borman.
  −
  −
* [http://pages.cs.wisc.edu/~jerryzhu/cs838/EM.pdf The EM Algorithm], by Xiaojin Zhu.
  −
  −
* [https://arxiv.org/pdf/1105.1476.pdf EM algorithm and variants: an informal tutorial] by Alexis Roche.  A concise and very clear description of EM and many interesting variants.
  −
  −
  −
  −
{{DEFAULTSORT:Expectation-maximization Algorithm}}
  −
  −
[[Category:Estimation methods]]
      
Category:Estimation methods
 
Category:Estimation methods
第1,973行: 第697行:  
类别: 估算方法
 
类别: 估算方法
   −
[[Category:Machine learning algorithms]]
+
: <math>\widehat{\sigma}^2_w =  \frac{1}{N} \sum_{k=1}^N {(\widehat{x}_{k+1}-\widehat{F}\widehat{{x}}_k)}^2,</math>
    
Category:Machine learning algorithms
 
Category:Machine learning algorithms
第1,979行: 第703行:  
类别: 机器学习算法
 
类别: 机器学习算法
   −
[[Category:Missing data]]
+
 
    
Category:Missing data
 
Category:Missing data
第1,985行: 第709行:  
类别: 缺失数据
 
类别: 缺失数据
   −
[[Category:Statistical algorithms]]
+
where <math>\widehat{x}_k</math> and <math>\widehat{x}_{k+1}</math> are scalar state estimates calculated by a filter or a smoother. The updated model coefficient estimate is obtained via
    
Category:Statistical algorithms
 
Category:Statistical algorithms
第1,991行: 第715行:  
类别: 统计算法
 
类别: 统计算法
   −
[[Category:Optimization algorithms and methods]]
+
: <math>\widehat{F} = \frac{\sum_{k=1}^N \widehat{x}_k\,\widehat{x}_{k+1}}{\sum_{k=1}^N \widehat{x}_k^2}.</math>
    
Category:Optimization algorithms and methods
 
Category:Optimization algorithms and methods
第1,997行: 第721行:  
类别: 优化算法和方法
 
类别: 优化算法和方法
   −
[[Category:Cluster analysis algorithms]]
+
 
    
Category:Cluster analysis algorithms
 
Category:Cluster analysis algorithms