{{#seo:
|keywords=统计学,期望最大化算法,统计模型
|description=是一种寻找统计模型中(局部)极大似然或者最大后验参数估计的迭代方法
}}
    
在统计学中,'''期望最大化算法 expectation–maximization algorithm(EM algorithm)'''是一种寻找统计模型中(局部)极大似然或者最大后验 maximum a posteriori(MAP)参数估计的迭代方法,其中的统计模型依赖于未观测到的潜在变量。EM迭代过程中交替执行期望(E)步和最大化(M)步;前者使用当前参数估计值建立对数似然函数的期望函数,后者计算能够最大化E步中获得的期望对数似然函数的参数。这些参数估计值将用于确定下一个E步中潜在变量的分布。
 
== 作为最大化-最大化过程 ==
 
EM算法可以看作是两个交替的最大化步骤,即作为坐标下降法的一个例子。<ref name="neal1999">{{cite book|last1=Neal |first=Radford |last2=Hinton |first2=Geoffrey |author-link2=Geoffrey Hinton |title=A view of the EM algorithm that justifies incremental, sparse, and other variants |journal=Learning in Graphical Models |editor=Michael I. Jordan |editor-link=Michael I. Jordan |pages= 355–368 |publisher= MIT Press |location=Cambridge, MA |year=1999 |isbn=978-0-262-60032-3 |url=http://ftp.cs.toronto.edu/pub/radford/emk.pdf |access-date=2009-03-22}}</ref><ref name="hastie2001">{{cite book|last1=Hastie |first1= Trevor|author-link1=Trevor Hastie|last2=Tibshirani|first2=Robert|author-link2=Robert Tibshirani|last3=Friedman|first3=Jerome |year=2001 |title=The Elements of Statistical Learning |url=https://archive.org/details/elementsstatisti00thas_842 |url-access=limited |isbn=978-0-387-95284-0 |publisher=Springer |location=New York |chapter=8.5 The EM algorithm |pages=[https://archive.org/details/elementsstatisti00thas_842/page/n237 236]–243}}</ref>考虑函数:
:<math> F(q,\theta) := \operatorname{E}_q [ \log L (\theta ; x,Z) ] + H(q), </math>
''q''是未观测数据 ''z''上的任意概率分布,''H(q)''是分布''q''的熵。 这个函数可以写成
:<math> F(q,\theta) = -D_{\mathrm{KL}}\big(q \parallel p_{Z\mid X}(\cdot\mid x;\theta ) \big) + \log L(\theta;x), </math>
其中<math>p_{Z\mid X}(\cdot\mid x;\theta )</math>是在给定观察数据<math>x</math>前提下未观察到数据的条件分布,<math>D_{KL}</math>是 Kullback–Leibler 散度。那么EM算法的步骤可以看成:
''期望步 Expectation step'':选择<math>q</math>最大化<math>F</math>:
::<math> q^{(t)} = \operatorname{arg\,max}_q \ F(q,\theta^{(t)}) </math>
''最大化步 Maximization step'': 选择<math>\theta</math>最大化<math>F</math>:
::<math> \theta^{(t+1)} = \operatorname{arg\,max}_\theta \ F(q^{(t)},\theta) </math>
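下面用 Python 给出这一交替最大化结构的一个极简示意(仅为框架性草图;其中 e_step、m_step 为假设的用户自定义函数,分别实现上面两个 argmax 步骤,并假定 e_step 顺带返回当前的观测数据对数似然):

<syntaxhighlight lang="python">
def em_as_coordinate_ascent(theta0, e_step, m_step, max_iter=100, tol=1e-8):
    """把 EM 视为对 F(q, theta) 的坐标上升:交替对 q 和 theta 求最大化。

    e_step(theta) -> (q, log_lik):q^(t) = argmax_q F(q, theta^(t)),
                     在最优的 q 处有 F(q^(t), theta^(t)) = log L(theta^(t); x);
    m_step(q)     -> theta:       theta^(t+1) = argmax_theta F(q^(t), theta)。
    """
    theta, log_lik_old = theta0, float("-inf")
    for _ in range(max_iter):
        q, log_lik = e_step(theta)          # E 步:固定 theta,对 q 最大化 F
        theta = m_step(q)                   # M 步:固定 q,对 theta 最大化 F
        if log_lik - log_lik_old < tol:     # 对数似然单调不减,可作收敛判据
            break
        log_lik_old = log_lik
    return theta
</syntaxhighlight>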
   
== 应用 ==
 
EM 经常用于[[机器学习]]和计算机视觉中的数据聚类。 在[[自然语言处理]]中,该算法的两个突出实例是用于[[隐马尔可夫模型]]的 Baum-Welch 算法和用于无监督概率上下文无关文法归纳的内-外算法。EM 经常用于混合模型的参数估计,<ref>{{cite journal |doi=10.1080/01621459.1988.10478693 |title=Newton—Raphson and EM Algorithms for Linear Mixed-Effects Models for Repeated-Measures Data |journal=Journal of the American Statistical Association |volume=83 |issue=404 |pages=1014 |year=1988 |last1=Lindstrom |first1=Mary J |last2=Bates |first2=Douglas M }}</ref><ref>{{cite journal |doi=10.2307/1390614 |jstor=1390614 |title=Fitting Mixed-Effects Models Using Efficient EM-Type Algorithms |journal=Journal of Computational and Graphical Statistics |volume=9 |issue=1 |pages=78–98 |year=2000 |last1=Van Dyk |first1=David A }}</ref>特别是在数量遗传学中。<ref>{{cite journal |doi=10.1111/anzs.12208 |title=A new REML (parameter expanded) EM algorithm for linear mixed models |journal=Australian & New Zealand Journal of Statistics |volume=59 |issue=4 |pages=433 |year=2017 |last1=Diffey |first1=S. M |last2=Smith |first2=A. B |last3=Welsh |first3=A. H |last4=Cullis |first4=B. R |doi-access=free }}</ref>
在心理测量学中,EM对于估计项目响应理论模型的项目参数和潜在能力几乎是必不可少的。
由于能够处理缺失数据和未观测变量,EM 正成为对投资组合进行定价和风险管理的有用工具。
EM 算法(及其更快的变体有序子集期望最大化)也广泛应用于医学图像重建,特别是在正电子发射断层扫描、单光子发射计算机断层扫描和X射线计算机断层扫描中。其他更快的 EM 变体见下文。
在结构工程中,使用期望最大化进行结构识别的 STRIDE<ref>Matarazzo, T. J., and Pakzad, S. N. (2016). “STRIDE for Structural Identification using Expectation Maximization: Iterative Output-Only Method for Modal Identification.” Journal of Engineering Mechanics. http://ascelibrary.org/doi/abs/10.1061/(ASCE)EM.1943-7889.0000951</ref>算法是一种仅依赖输出数据的方法,用于利用传感器数据识别结构系统的自然振动特性(见运行模态分析)。
      
== 滤波和平滑EM算法 ==
 
   
卡尔曼滤波器通常用于在线状态估计,最小方差平滑器可用于离线或批状态估计。然而,这些最小方差解需要状态空间模型参数的估计。EM 算法可用于求解联合状态和参数估计问题。
 
      
滤波和平滑 EM 算法通过重复以下两个步骤得到:
 
'''E步'''
:用当前参数估计所设计的卡尔曼滤波器或最小方差平滑器进行处理,以获得更新后的状态估计。
'''M步'''
:在极大似然计算中使用滤波或平滑得到的状态估计,以获得更新后的参数估计。
假设卡尔曼滤波器或最小方差平滑器处理的是具有加性白噪声的单输入单输出系统的测量。通过极大似然计算,可以得到更新后的测量噪声方差估计
: <math>\widehat{\sigma}^2_v = \frac{1}{N} \sum_{k=1}^N {(z_k-\widehat{x}_k)}^2,</math>
其中<math>\widehat{x}_k</math>是由滤波器或平滑器根据 N 个标量测量<math>z_k</math>计算得到的标量输出估计。上述更新也可以用于更新泊松测量噪声强度。类似地,对于一阶自回归过程,更新的过程噪声方差估计可以计算为
: <math>\widehat{\sigma}^2_w =  \frac{1}{N} \sum_{k=1}^N {(\widehat{x}_{k+1}-\widehat{F}\widehat{{x}}_k)}^2,</math>
其中 <math>\widehat{x}_k</math> 和 <math>\widehat{x}_{k+1}</math> 是由滤波器或平滑器计算的标量状态估计。更新后的模型系数估计通过下式获得
: <math>\widehat{F} = \frac{\sum_{k=1}^N (\widehat{x}_{k+1}-\widehat{F} \widehat{x}_k)}{\sum_{k=1}^N \widehat{x}_k^2}.</math>
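下面是一个 numpy 最小示意(数据为随意假设,仅演示上述噪声方差更新的计算方式;其中模型系数按一阶自回归的标准极大似然形式计算,属于本文之外的实现假设):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
# 假设:x_hat 为滤波器/平滑器给出的标量状态估计(多出一项用于 k+1),
#       z 为对应的 N 个含加性白噪声的标量测量
x_hat = np.cumsum(rng.normal(scale=0.1, size=101))
z = x_hat[:100] + rng.normal(scale=0.5, size=100)
N = len(z)

# 更新的测量噪声方差估计:sigma_v^2 = (1/N) * sum_k (z_k - x_hat_k)^2
sigma_v2 = np.mean((z - x_hat[:N]) ** 2)

# 更新的模型系数(AR(1) 的标准 ML 形式,作为假设)与过程噪声方差估计
F_hat = np.sum(x_hat[1:N + 1] * x_hat[:N]) / np.sum(x_hat[:N] ** 2)
sigma_w2 = np.mean((x_hat[1:N + 1] - F_hat * x_hat[:N]) ** 2)

print(sigma_v2, F_hat, sigma_w2)
</syntaxhighlight>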
上述参数估计的收敛性已得到充分研究。<ref>{{Cite journal |last1=Einicke |first1=G. A. |last2=Malos |first2=J. T. |last3=Reid |first3=D. C. |last4=Hainsworth |first4=D. W. |title=Riccati Equation and EM Algorithm Convergence for Inertial Navigation Alignment |journal=IEEE Trans. Signal Process. |volume=57 |issue=1 |pages=370–375 |date=January 2009 |doi=10.1109/TSP.2008.2007090 |bibcode=2009ITSP...57..370E |s2cid=1930004}}</ref><ref>{{Cite journal |last1=Einicke |first1=G. A. |last2=Falco |first2=G. |last3=Malos |first3=J. T. |title=EM Algorithm State Matrix Estimation for Navigation |journal=IEEE Signal Processing Letters |volume=17 |issue=5 |pages=437–440 |date=May 2010 |doi=10.1109/LSP.2010.2043151 |bibcode=2010ISPL...17..437E |s2cid=14114266}}</ref><ref>{{Cite journal |last1=Einicke |first1=G. A. |last2=Falco |first2=G. |last3=Dunn |first3=M. T. |last4=Reid |first4=D. C. |title=Iterative Smoother-Based Variance Estimation |journal=IEEE Signal Processing Letters |volume=19 |issue=5 |pages=275–278 |date=May 2012 |bibcode=2012ISPL...19..275E |doi=10.1109/LSP.2012.2190278 |s2cid=17476971}}</ref><ref>{{Cite journal |last=Einicke |first=G. A. |title=Iterative Filtering and Smoothing of Measurements Possessing Poisson Noise |journal=IEEE Transactions on Aerospace and Electronic Systems |volume=51 |issue=3 |pages=2205–2011 |date=Sep 2015 |doi=10.1109/TAES.2015.140843 |bibcode=2015ITAES..51.2205E |s2cid=32667132}}</ref>
== 变体 ==
针对有时EM 算法收敛速度慢的问题,一些改进方法被提出,如共轭梯度法和修正牛顿法(Newton-Raphson)。<ref>{{cite journal |first1= Mortaza |last1=Jamshidian |first2=Robert I. |last2=Jennrich|title=Acceleration of the EM Algorithm by using Quasi-Newton Methods |year=1997 |journal=[[Journal of the Royal Statistical Society, Series B]] |volume=59 |issue=2 |pages=569–587 |doi=10.1111/1467-9868.00083 |mr=1452026 }}</ref> 此外,EM 还可以与约束估计方法一起使用。
''参数扩展期望最大化 Parameter-expanded expectation maximization (PX-EM)''算法通过“利用插补得到的完整数据中蕴含的额外信息,以‘协方差调整’来校正 M 步的分析”,从而提高计算速度。<ref>{{cite journal |doi=10.1093/biomet/85.4.755 |title=Parameter expansion to accelerate EM: The PX-EM algorithm |journal=Biometrika |volume=85 |issue=4 |pages=755–770 |year=1998 |last1=Liu |first1=C |citeseerx=10.1.1.134.9617 }}</ref>
''期望条件最大化 Expectation conditional maximization (ECM)''用一系列条件最大化(CM)步骤代替每个 M 步,其中每个参数 ''θ''<sub>''i''</sub> 在其余参数保持不变的条件下单独最大化。<ref>{{cite journal |last1=Meng |first1=Xiao-Li |last2=Rubin |first2=Donald B. |s2cid=40571416 |author-link2=Donald Rubin |title=Maximum likelihood estimation via the ECM algorithm: A general framework |year=1993 |journal=[[Biometrika]] |volume=80 |issue=2 |pages=267–278 |doi=10.1093/biomet/80.2.267 |mr=1243503}}</ref> ECM 本身还可以扩展为''期望条件最大化 Expectation conditional maximization either (ECME)''算法。<ref>{{cite journal |doi=10.1093/biomet/81.4.633 |jstor=2337067 |title=The ECME Algorithm: A Simple Extension of EM and ECM with Faster Monotone Convergence |journal=Biometrika |volume=81 |issue=4 |pages=633 |year=1994 |last1=Liu |first1=Chuanhai |last2=Rubin |first2=Donald B }}</ref>
也可以将 EM 算法视为 MM(Majorize/Minimize 或 Minorize/Maximize,取决于上下文)算法的一个子类,<ref>Hunter DR and Lange K (2004), [http://www.stat.psu.edu/~dhunter/papers/mmtutorial.pdf A Tutorial on MM Algorithms], The American Statistician, 58: 30–37</ref>因此可以使用为更一般情形开发的任何机制。
   −
也可以考虑将 EM 算法作为 MM (majorize/minize 或 Minorize/Maximize,取决于上下文)算法的子类,因此可以使用在更一般情况下开发的任何机制。
      
'''α-EM算法'''
 
EM 算法中使用的 Q 函数基于对数似然,因此它被视为 log-EM 算法。对数似然的使用可以推广到 α-对数似然比。于是,利用 α-对数似然比的 Q 函数和 α-散度,可以将观测数据的 α-对数似然比精确地表示为等式。获得这个 Q 函数是广义的 E 步,对它求最大化则是广义的 M 步。这一对步骤称为 α-EM 算法,<ref>{{cite journal |last=Matsuyama |first=Yasuo |title=The α-EM algorithm: Surrogate likelihood maximization using α-logarithmic information measures |journal=IEEE Transactions on Information Theory |volume=49 |year=2003 |pages=692–706 |issue=3 |doi=10.1109/TIT.2002.808105}}</ref>它包含 log-EM 算法作为其子类。因此,Yasuo Matsuyama 提出的 α-EM 算法是 log-EM 算法的精确推广,不需要计算梯度或 Hessian 矩阵。通过选择合适的 α,α-EM 可以比 log-EM 算法收敛得更快。α-EM 算法还带来了隐马尔可夫模型估计算法 α-HMM 的更快版本。<ref>{{cite journal |last=Matsuyama |first=Yasuo |title=Hidden Markov model estimation based on alpha-EM algorithm: Discrete and continuous alpha-HMMs |journal=International Joint Conference on Neural Networks |year=2011 |pages=808–816}}</ref>
      
== 与变分贝叶斯方法的关系 ==
 
      
EM 是一种部分非贝叶斯的最大似然方法。它的最终结果给出潜在变量的一个概率分布(贝叶斯风格)以及 θ 的一个点估计(最大似然估计或后验众数)。有时人们需要它的完全贝叶斯版本,即给出 θ 和潜在变量的联合概率分布。贝叶斯推断方法只需把 θ 也当作一个潜在变量来处理。在这一范式下,E 步和 M 步之间的区别就消失了。如果使用上述的因子化 Q 近似(变分贝叶斯),求解时可以在每个潜在变量(现在也包括 θ)上迭代,一次优化一个。此时每次迭代需要 k 个步骤,其中 k 是潜在变量的个数。对于图模型而言这很容易做到,因为每个变量的新 Q 只依赖于它的马尔可夫毯,所以可以用局部消息传递进行高效推断。
 
    
== 几何解释 ==
 
      
在信息几何中,E 步和 M 步被解释为对偶仿射连接(分别称为 e-connection 和 m-connection)下的投影;Kullback–Leibler 散度也可以用这些术语来理解。
 
    
== 例子 ==
 
'''<big>高斯混合</big>'''
设 <math>\mathbf{x} = (\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_n)</math> 是来自两个 <math>d</math> 维多元正态分布的混合的 <math>n</math> 个独立观测构成的样本,并设 <math>\mathbf{z} = (z_1,z_2,\ldots,z_n)</math> 为确定每个观测来自哪个成分的潜在变量。<ref name="hastie2001"/>
: <math>X_i \mid(Z_i = 1) \sim \mathcal{N}_d(\boldsymbol{\mu}_1,\Sigma_1)</math> and <math>X_i \mid(Z_i = 2) \sim \mathcal{N}_d(\boldsymbol{\mu}_2,\Sigma_2),</math>
其中
: <math>\operatorname{P} (Z_i = 1 ) = \tau_1 \, </math> and <math>\operatorname{P} (Z_i=2) = \tau_2 = 1-\tau_1.</math>
目标是估计未知参数,包括两个高斯成分之间的混合比例以及每个成分的均值和协方差:
: <math>\theta = \big( \boldsymbol{\tau},\boldsymbol{\mu}_1,\boldsymbol{\mu}_2,\Sigma_1,\Sigma_2 \big),</math>
其中不完全数据似然函数是
: <math>L(\theta;\mathbf{x}) = \prod_{i=1}^n \sum_{j=1}^2 \tau_j \ f(\mathbf{x}_i;\boldsymbol{\mu}_j,\Sigma_j),</math>
并且完全数据似然函数是
: <math>L(\theta;\mathbf{x},\mathbf{z}) = p(\mathbf{x},\mathbf{z} \mid \theta) = \prod_{i=1}^n  \prod_{j=1}^2  \ [f(\mathbf{x}_i;\boldsymbol{\mu}_j,\Sigma_j) \tau_j] ^{\mathbb{I}(z_i=j)},</math>
或者
: <math>L(\theta;\mathbf{x},\mathbf{z}) = \exp \left\{ \sum_{i=1}^n \sum_{j=1}^2 \mathbb{I}(z_i=j) \big[ \log \tau_j -\tfrac{1}{2} \log |\Sigma_j| -\tfrac{1}{2}(\mathbf{x}_i-\boldsymbol{\mu}_j)^\top\Sigma_j^{-1} (\mathbf{x}_i-\boldsymbol{\mu}_j) -\tfrac{d}{2} \log(2\pi) \big] \right\},</math>
其中 <math>\mathbb{I}</math> 是指示函数,<math>f</math> 是多元正态分布的概率密度函数。
在最后一个等式中,对于每个 {{math|''i''}},指示函数 <math>\mathbb{I}(z_i=j)</math> 中恰有一个等于一,另一个等于零,因此内部求和化简为单独一项。
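为了直观起见,下面用 numpy 按上述模型生成一组模拟数据(真实参数为任意假设,仅作演示;后文的 E 步、M 步示意代码将沿用这里的记号):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(42)

# 假设的"真实"参数:两个 d = 2 维正态分布的混合
tau_true = np.array([0.6, 0.4])
mu_true = [np.array([0.0, 0.0]), np.array([4.0, 4.0])]
Sigma_true = [np.eye(2), np.array([[1.5, 0.3], [0.3, 0.5]])]

n = 500
# 潜在变量 z_i:决定第 i 个观测来自哪个成分(0/1 对应正文中的 1/2)
z = rng.choice(2, size=n, p=tau_true)
# 观测 x_i | (z_i = j) ~ N(mu_j, Sigma_j)
x = np.array([rng.multivariate_normal(mu_true[j], Sigma_true[j]) for j in z])
</syntaxhighlight>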
==== E 步骤 ====
给定我们当前对参数 ''θ''<sup>(''t'')</sup> 的估计,''Z''<sub>''i''</sub> 的条件分布由贝叶斯定理确定,正比于由 ''τ'' 加权的正态密度:
: <math>T_{j,i}^{(t)} := \operatorname{P}(Z_i=j \mid X_i=\mathbf{x}_i ;\theta^{(t)}) = \frac{\tau_j^{(t)} \ f(\mathbf{x}_i;\boldsymbol{\mu}_j^{(t)},\Sigma_j^{(t)})}{\tau_1^{(t)} \ f(\mathbf{x}_i;\boldsymbol{\mu}_1^{(t)},\Sigma_1^{(t)}) + \tau_2^{(t)} \ f(\mathbf{x}_i;\boldsymbol{\mu}_2^{(t)},\Sigma_2^{(t)})}.</math>
这些被称为“成员概率”,通常被认为是 E 步骤的输出(尽管这不是下面的 Q 函数)。
此 E 步骤对应于为 Q 设定如下函数:
: <math>\begin{align}Q(\theta\mid\theta^{(t)})
&= \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\mathbf{\theta}^{(t)}} [\log L(\theta;\mathbf{x},\mathbf{Z}) ] \\
&= \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\mathbf{\theta}^{(t)}} [\log \prod_{i=1}^{n}L(\theta;\mathbf{x}_i,Z_i) ] \\
&= \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\mathbf{\theta}^{(t)}} [\sum_{i=1}^n \log L(\theta;\mathbf{x}_i,Z_i) ] \\
&= \sum_{i=1}^n\operatorname{E}_{Z_i\mid\mathbf{X};\mathbf{\theta}^{(t)}} [\log L(\theta;\mathbf{x}_i,Z_i) ] \\
&= \sum_{i=1}^n \sum_{j=1}^2 P(Z_i =j \mid X_i = \mathbf{x}_i; \theta^{(t)}) \log L(\theta_j;\mathbf{x}_i,j) \\
&= \sum_{i=1}^n \sum_{j=1}^2 T_{j,i}^{(t)} \big[ \log \tau_j  -\tfrac{1}{2} \log |\Sigma_j| -\tfrac{1}{2}(\mathbf{x}_i-\boldsymbol{\mu}_j)^\top\Sigma_j^{-1} (\mathbf{x}_i-\boldsymbol{\mu}_j) -\tfrac{d}{2} \log(2\pi) \big].
\end{align}</math>
求和号内部对 <math>\log L(\theta;\mathbf{x}_i,Z_i)</math> 取期望时,所用的概率密度函数是 <math>P(Z_i \mid X_i = \mathbf{x}_i; \theta^{(t)})</math>,它对训练集中的每个 <math>\mathbf{x}_i</math> 可能各不相同。E 步中的所有量在执行该步骤之前都已知,只有 <math>T_{j,i}</math> 例外,它按照 E 步部分开头的方程计算。
这一完整的条件期望不需要一步计算,因为 ''τ''和 '''μ'''/'''Σ''' 出现在单独的线性项中,因此可以独立最大化。
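作为示意,E 步可以用 numpy/scipy 写成如下形式(假设观测矩阵 x 的形状为 (n, d),参数以 tau、mu、Sigma 列表给出;函数名与变量组织方式均为本文之外的假设):

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import multivariate_normal

def e_step(x, tau, mu, Sigma):
    """按上式计算成员概率 T[j, i] = P(Z_i = j | X_i = x_i; theta^(t))。"""
    # weighted[j, i] = tau_j * f(x_i; mu_j, Sigma_j)
    weighted = np.vstack([
        tau[j] * multivariate_normal.pdf(x, mean=mu[j], cov=Sigma[j])
        for j in range(len(tau))
    ])
    return weighted / weighted.sum(axis=0, keepdims=True)  # 对每个 i 归一化
</syntaxhighlight>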
==== M 步骤 ====
''Q''(''θ''&nbsp;|&nbsp;''θ''<sup>(''t'')</sup>) 在形式上是二次的,这意味着求使其最大化的 ''θ'' 值相对简单。此外,''τ''、('''μ'''<sub>1</sub>,''Σ''<sub>1</sub>) 和 ('''μ'''<sub>2</sub>,''Σ''<sub>2</sub>) 可以分别独立地最大化,因为它们都出现在相互独立的线性项中。
首先,考虑''τ'',其具有约束 ''τ''<sub>1</sub> + ''τ''<sub>2</sub>=1:
: <math>\begin{align}\boldsymbol{\tau}^{(t+1)}
&= \underset{\boldsymbol{\tau}} {\operatorname{arg\,max}}\  Q(\theta \mid \theta^{(t)} ) \\
&= \underset{\boldsymbol{\tau}} {\operatorname{arg\,max}} \ \left\{ \left[  \sum_{i=1}^n T_{1,i}^{(t)} \right] \log \tau_1 + \left[  \sum_{i=1}^n T_{2,i}^{(t)} \right] \log \tau_2  \right\}.
\end{align}</math>
   −
'''M步'''
     −
Q(θ | θ(t)) being quadratic in form means that determining the maximizing values of θ is relatively straightforward. Also, τ, (μ1,Σ1) and (μ2,Σ2) may all be maximized independently since they all appear in separate linear terms.
+
这与二项式分布的 MLE 具有相同的形式,因此
: <math>\tau^{(t+1)}_j = \frac{\sum_{i=1}^n T_{j,i}^{(t)}}{\sum_{i=1}^n (T_{1,i}^{(t)} + T_{2,i}^{(t)} ) } = \frac{1}{n} \sum_{i=1}^n T_{j,i}^{(t)}.</math>
对于 ('''μ'''<sub>1</sub>,''Σ''<sub>1</sub>) 的下一个估计:
: <math>\begin{align}(\boldsymbol{\mu}_1^{(t+1)},\Sigma_1^{(t+1)})
&= \underset{\boldsymbol{\mu}_1,\Sigma_1} \operatorname{arg\,max} Q(\theta \mid \theta^{(t)} ) \\
&= \underset{\boldsymbol{\mu}_1,\Sigma_1} \operatorname{arg\,max} \sum_{i=1}^n T_{1,i}^{(t)} \left\{ -\tfrac{1}{2} \log |\Sigma_1| -\tfrac{1}{2}(\mathbf{x}_i-\boldsymbol{\mu}_1)^\top\Sigma_1^{-1} (\mathbf{x}_i-\boldsymbol{\mu}_1) \right\}
\end{align}.</math>
这与正态分布的加权 MLE 具有相同的形式,因此
: <math>\boldsymbol{\mu}_1^{(t+1)} = \frac{\sum_{i=1}^n T_{1,i}^{(t)} \mathbf{x}_i}{\sum_{i=1}^n T_{1,i}^{(t)}} </math> and <math>\Sigma_1^{(t+1)} = \frac{\sum_{i=1}^n T_{1,i}^{(t)} (\mathbf{x}_i - \boldsymbol{\mu}_1^{(t+1)}) (\mathbf{x}_i - \boldsymbol{\mu}_1^{(t+1)})^\top }{\sum_{i=1}^n T_{1,i}^{(t)}} </math>
并且,通过对称性,
: <math>\boldsymbol{\mu}_2^{(t+1)} = \frac{\sum_{i=1}^n T_{2,i}^{(t)} \mathbf{x}_i}{\sum_{i=1}^n T_{2,i}^{(t)}} </math> and <math>\Sigma_2^{(t+1)} = \frac{\sum_{i=1}^n T_{2,i}^{(t)} (\mathbf{x}_i - \boldsymbol{\mu}_2^{(t+1)}) (\mathbf{x}_i - \boldsymbol{\mu}_2^{(t+1)})^\top }{\sum_{i=1}^n T_{2,i}^{(t)}}</math>
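与之对应的 M 步可按上面的加权 MLE 公式实现如下(延续前面 E 步示意中的记号,T 为成员概率矩阵;函数名为假设,仅作演示):

<syntaxhighlight lang="python">
import numpy as np

def m_step(x, T):
    """由成员概率 T 更新 tau、mu 与 Sigma(对每个成分做加权 MLE)。"""
    n = x.shape[0]
    tau, mu, Sigma = [], [], []
    for j in range(T.shape[0]):
        w = T[j]                                  # 权重 T_{j,i}
        tau.append(w.sum() / n)                   # tau_j^{(t+1)}
        mu_j = (w[:, None] * x).sum(axis=0) / w.sum()
        diff = x - mu_j
        # 加权样本协方差:sum_i T_{j,i} (x_i - mu_j)(x_i - mu_j)^T / sum_i T_{j,i}
        Sigma_j = (w[:, None, None] * diff[:, :, None] * diff[:, None, :]).sum(axis=0) / w.sum()
        mu.append(mu_j)
        Sigma.append(Sigma_j)
    return np.array(tau), mu, Sigma
</syntaxhighlight>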
    
==== 终止 ====
 
如果对于低于某个预设阈值的 <math> \varepsilon </math>,有 <math> E_{Z\mid\theta^{(t)},\mathbf{x}}[\log L(\theta^{(t)};\mathbf{x},\mathbf{Z})] \leq E_{Z\mid\theta^{(t-1)},\mathbf{x}}[\log L(\theta^{(t-1)};\mathbf{x},\mathbf{Z})] + \varepsilon</math>,则终止迭代过程。
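把前面的 e_step 与 m_step 组合起来,就得到完整的迭代;下面的示意采用实践中常用的近似判据(观测数据对数似然的增量低于阈值 ε 即停止),而非严格按上式计算期望完全数据对数似然:

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import multivariate_normal

def log_likelihood(x, tau, mu, Sigma):
    """观测数据(不完全数据)的对数似然 log L(theta; x)。"""
    dens = sum(tau[j] * multivariate_normal.pdf(x, mean=mu[j], cov=Sigma[j])
               for j in range(len(tau)))
    return np.log(dens).sum()

def fit_gmm_em(x, tau, mu, Sigma, eps=1e-6, max_iter=200):
    """交替执行 E 步与 M 步,直到对数似然的增量不超过 eps。"""
    ll_old = log_likelihood(x, tau, mu, Sigma)
    for _ in range(max_iter):
        T = e_step(x, tau, mu, Sigma)      # E 步:成员概率
        tau, mu, Sigma = m_step(x, T)      # M 步:参数更新
        ll = log_likelihood(x, tau, mu, Sigma)
        if ll - ll_old <= eps:             # 终止条件
            break
        ll_old = ll
    return tau, mu, Sigma
</syntaxhighlight>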
    
'''一般化'''
 
上面说明的算法可以推广到两个以上多元正态分布的混合。
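例如,上面的示意实现并未把成分数写死为 2:只要给出更长的参数列表,同一套 e_step、m_step、fit_gmm_em 就可以拟合三个成分的混合(以下数据与初始值均为假设):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

# 生成一个假设的三成分混合样本
mus = [np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([0.0, 4.0])]
z3 = rng.choice(3, size=600, p=[0.5, 0.3, 0.2])
x3 = np.array([rng.multivariate_normal(mus[j], np.eye(2)) for j in z3])

# 初始化三个成分并复用前面的 fit_gmm_em
tau0 = np.full(3, 1 / 3)
mu0 = [x3[i] for i in rng.choice(len(x3), size=3, replace=False)]  # 随机样本点作初始均值
Sigma0 = [np.eye(2) for _ in range(3)]
tau_hat, mu_hat, Sigma_hat = fit_gmm_em(x3, tau0, mu0, Sigma0)
</syntaxhighlight>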
      
'''截断和删失回归'''
 
当存在一个解释某些量的变化的潜在线性回归模型,而实际观测到的值是该模型所表示的值的删失或截断版本时,也可以应用 EM 算法。该模型的特殊情形包括来自单个正态分布的删失或截断观测。
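作为这类模型最简单的特例,下面给出一个对右删失正态样本用 EM 估计均值与方差的最小示意(使用标准的截断正态条件矩公式;函数与数据组织方式均为本文之外的假设):

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

def em_censored_normal(y, censored, n_iter=200):
    """右删失正态样本的示意性 EM:y 中被删失的观测记录的是删失点 c,
    censored 为布尔数组,标记哪些观测被删失(真实值只知道不小于 c)。"""
    y = np.asarray(y, dtype=float)
    mu, sigma = y.mean(), y.std() + 1e-6
    for _ in range(n_iter):
        # E 步:用截断正态 X | X > c 的一阶、二阶条件矩填补删失观测
        m1, m2 = y.copy(), y ** 2
        a = (y[censored] - mu) / sigma
        lam = norm.pdf(a) / norm.sf(a)        # 逆 Mills 比
        m1[censored] = mu + sigma * lam
        m2[censored] = mu ** 2 + sigma ** 2 + sigma * (y[censored] + mu) * lam
        # M 步:按完整数据的正态 MLE 更新参数
        mu = m1.mean()
        sigma = np.sqrt(max(m2.mean() - mu ** 2, 1e-12))
    return mu, sigma ** 2
</syntaxhighlight>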

== 替代方法 ==

EM 通常收敛到局部最优而不一定是全局最优,并且一般而言对收敛速度没有任何界。它在高维情形下可能表现得任意差,并且局部最优的数量可能呈指数级。因此,需要能保证学习效果的替代方法,尤其是在高维情形下。存在一致性保证更好的 EM 替代方案,称为基于矩的方法(''moment-based approaches'')或所谓的谱技术(''spectral techniques'')。基于矩的概率模型参数学习方法近来日益受到关注,因为它们在一定条件下享有诸如全局收敛之类的保证,而 EM 则常常受困于陷入局部最优的问题。对于许多重要模型(例如混合模型、HMM 等),都可以推导出具有学习保证的算法。对于这些谱方法,不会出现虚假的局部最优,并且在一定的正则性条件下可以一致地估计出真实参数。

==参考文献==
<references/>
==进一步阅读==
* {{cite book |first1=Robert |last1=Hogg |first2=Joseph |last2=McKean |author-link3=Allen Craig |first3=Allen |last3=Craig |title=Introduction to Mathematical Statistics |pages=359–364 |location=Upper Saddle River, NJ |publisher=Pearson Prentice Hall |year=2005 }}
* {{cite journal |citeseerx= 10.1.1.9.9735 |title= The Expectation Maximization Algorithm|first=Frank |last= Dellaert |author-link= Frank Dellaert |year= 2002}} gives an easier explanation of EM algorithm as to lowerbound maximization.
* {{cite book |last1=Bishop |first1=Christopher M. |author-link=Christopher Bishop |title=Pattern Recognition and Machine Learning |year=2006 |publisher=Springer |isbn=978-0-387-31073-2}}
* {{cite journal |doi=10.1561/2000000034 |title=Theory and Use of the EM Algorithm |journal=Foundations and Trends in Signal Processing |volume=4 |issue=3 |pages=223–296 |first1=M. R. |last1=Gupta |first2=Y. |last2=Chen |year=2010 |citeseerx=10.1.1.219.6830 }} A well-written short book on EM, including detailed derivation of EM for GMMs, HMMs, and Dirichlet.
* {{cite journal |citeseerx= 10.1.1.28.613 |title= A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models|first=Jeff |last= Bilmes |year= 1998}} includes a simplified derivation of the EM equations for Gaussian Mixtures and Gaussian Mixture Hidden Markov Models.
* {{cite book |first1=Geoffrey J. |last1=McLachlan |first2=Thriyambakam |last2=Krishnan |title=The EM Algorithm and Extensions |location=Hoboken |publisher=Wiley |edition=2nd |year=2008 |isbn=978-0-471-20170-0 }}
----
本中文词条由Xugami审校,[[用户:薄荷|薄荷]]编辑,如有问题,欢迎在讨论页面留言。
'''本词条内容源自wikipedia及公开资料,遵守 CC3.0协议。'''
[[Category:待整理页面]]
 