EM算法



In statistics, an expectation–maximization (EM) algorithm is an iterative method to find (local) maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models, where the model depends on unobserved latent variables. The EM iteration alternates between performing an expectation (E) step, which creates a function for the expectation of the log-likelihood evaluated using the current estimate for the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found on the E step. These parameter-estimates are then used to determine the distribution of the latent variables in the next E step.



EM clustering of Old Faithful eruption data. The random initial model (which, due to the different scales of the axes, appears to be two very flat and wide spheres) is fit to the observed data. In the first iterations, the model changes substantially, but then converges to the two modes of the geyser. Visualized using ELKI.



History

The EM algorithm was explained and given its name in a classic 1977 paper by Arthur Dempster, Nan Laird, and Donald Rubin.[1] They pointed out that the method had been "proposed many times in special circumstances" by earlier authors. One of the earliest is the gene-counting method for estimating allele frequencies by Cedric Smith.[2] A very detailed treatment of the EM method for exponential families was published by Rolf Sundberg in his thesis and several papers[3][4][5] following his collaboration with Per Martin-Löf and Anders Martin-Löf.[6][7][8][9][10][11][12] The Dempster–Laird–Rubin paper in 1977 generalized the method and sketched a convergence analysis for a wider class of problems. The Dempster–Laird–Rubin paper established the EM method as an important tool of statistical analysis.


The convergence analysis of the Dempster–Laird–Rubin algorithm was flawed and a correct convergence analysis was published by C. F. Jeff Wu in 1983.[13]

Wu's proof established the EM method's convergence outside of the exponential family, as claimed by Dempster–Laird–Rubin.[13]

EM is especially useful when the likelihood is an exponential family: the E step becomes the sum of expectations of sufficient statistics, and the M step involves maximizing a linear function. In such a case, it is usually possible to derive closed-form expression updates for each step, using the Sundberg formula (published by Rolf Sundberg using unpublished results of Per Martin-Löf and Anders Martin-Löf).



Introduction

The EM method was modified to compute maximum a posteriori (MAP) estimates for Bayesian inference in the original paper by Dempster, Laird, and Rubin.


The EM algorithm is used to find (local) maximum likelihood parameters of a statistical model in cases where the equations cannot be solved directly. Typically these models involve latent variables in addition to unknown parameters and known data observations. That is, either missing values exist among the data, or the model can be formulated more simply by assuming the existence of further unobserved data points. For example, a mixture model can be described more simply by assuming that each observed data point has a corresponding unobserved data point, or latent variable, specifying the mixture component to which each data point belongs.


Other methods exist to find maximum likelihood estimates, such as gradient descent, conjugate gradient, or variants of the Gauss–Newton algorithm. Unlike EM, such methods typically require the evaluation of first and/or second derivatives of the likelihood function.


Finding a maximum likelihood solution typically requires taking the derivatives of the likelihood function with respect to all the unknown values, the parameters and the latent variables, and simultaneously solving the resulting equations. In statistical models with latent variables, this is usually impossible. Instead, the result is typically a set of interlocking equations in which the solution to the parameters requires the values of the latent variables and vice versa, but substituting one set of equations into the other produces an unsolvable equation.


The EM algorithm proceeds from the observation that there is a way to solve these two sets of equations numerically. One can simply pick arbitrary values for one of the two sets of unknowns, use them to estimate the second set, then use these new values to find a better estimate of the first set, and then keep alternating between the two until the resulting values both converge to fixed points. It's not obvious that this will work, but it can be proven that in this context it does, and that the derivative of the likelihood is (arbitrarily close to) zero at that point, which in turn means that the point is either a maximum or a saddle point.[13] In general, multiple maxima may occur, with no guarantee that the global maximum will be found. Some likelihoods also have singularities in them, i.e., nonsensical maxima. For example, one of the solutions that may be found by EM in a mixture model involves setting one of the components to have zero variance and the mean parameter for the same component to be equal to one of the data points.
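As a minimal sketch of this alternation, consider a deliberately simple case in which the two component densities of a mixture are fully known, say [math]\displaystyle{ \mathcal{N}(0,1) }[/math] and [math]\displaystyle{ \mathcal{N}(4,1) }[/math], and only the mixing weight is unknown; the model, the numbers, and the fixed iteration count below are assumptions made purely for illustration.

```python
import numpy as np
from scipy.stats import norm

# Deliberately simple illustration: a mixture of two fully known Gaussians,
# N(0, 1) and N(4, 1), in which only the mixing weight tau is unknown.
# All numbers below are made up for illustration.
rng = np.random.default_rng(42)
data = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 1, 700)])  # generated with weight 0.3

tau = 0.5                                    # arbitrary starting value for the unknown weight
for _ in range(100):
    # First set of unknowns: estimate the latent memberships given the current tau
    p1 = tau * norm.pdf(data, 0, 1)
    p2 = (1 - tau) * norm.pdf(data, 4, 1)
    resp = p1 / (p1 + p2)                    # P(Z_i = 1 | x_i; tau)
    # Second set of unknowns: re-estimate tau given the estimated memberships
    tau = resp.mean()

print(round(tau, 3))                         # ends up close to the generating weight 0.3
```

Each pass uses one set of values to re-estimate the other, exactly the back-and-forth just described; the iteration settles at a fixed point of these two updates.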



Consider, as a running example, a mixture of two [math]\displaystyle{ d }[/math]-dimensional multivariate normal distributions. Let [math]\displaystyle{ \mathbf{x} = (\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_n) }[/math] be a sample of [math]\displaystyle{ n }[/math] independent observations from this mixture, and let [math]\displaystyle{ \mathbf{z} = (z_1,z_2,\ldots,z_n) }[/math] be the latent variables that determine the component from which each observation originates, so that

[math]\displaystyle{ X_i \mid(Z_i = 1) \sim \mathcal{N}_d(\boldsymbol{\mu}_1,\Sigma_1) }[/math] and [math]\displaystyle{ X_i \mid(Z_i = 2) \sim \mathcal{N}_d(\boldsymbol{\mu}_2,\Sigma_2), }[/math]


Description

where


Given the statistical model which generates a set [math]\displaystyle{ \mathbf{X} }[/math] of observed data, a set of unobserved latent data or missing values [math]\displaystyle{ \mathbf{Z} }[/math], and a vector of unknown parameters [math]\displaystyle{ \boldsymbol\theta }[/math], along with a likelihood function [math]\displaystyle{ L(\boldsymbol\theta; \mathbf{X}, \mathbf{Z}) = p(\mathbf{X}, \mathbf{Z}\mid\boldsymbol\theta) }[/math], the maximum likelihood estimate (MLE) of the unknown parameters is determined by maximizing the marginal likelihood of the observed data

[math]\displaystyle{ \operatorname{P} (Z_i = 1 ) = \tau_1 \,  }[/math] and [math]\displaystyle{ \operatorname{P} (Z_i=2) = \tau_2 = 1-\tau_1. }[/math]



[math]\displaystyle{ L(\boldsymbol\theta; \mathbf{X}) = p(\mathbf{X}\mid\boldsymbol\theta) = \int p(\mathbf{X},\mathbf{Z} \mid \boldsymbol\theta) \, d\mathbf{Z} }[/math]

The aim is to estimate the unknown parameters representing the mixing value between the Gaussians and the means and covariances of each:



[math]\displaystyle{ \theta = \big( \boldsymbol{\tau},\boldsymbol{\mu}_1,\boldsymbol{\mu}_2,\Sigma_1,\Sigma_2 \big), }[/math]


However, this quantity is often intractable (e.g. if [math]\displaystyle{ \mathbf{Z} }[/math] is a sequence of events, so that the number of values grows exponentially with the sequence length, the exact calculation of the sum will be extremely difficult).
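To make this concrete, the toy sketch below assumes a two-state hidden Markov model with made-up probabilities and compares brute-force marginalization over all [math]\displaystyle{ 2^n }[/math] latent paths with the forward recursion (the kind of dynamic programming used by the Baum–Welch algorithm mentioned later); both give the same marginal likelihood, but the brute-force sum already has 1024 terms for a sequence of length 10.

```python
import itertools
import numpy as np

# Hypothetical two-state HMM; all probabilities are made up for illustration.
pi = np.array([0.6, 0.4])             # initial state distribution
A = np.array([[0.7, 0.3],             # state transition probabilities
              [0.2, 0.8]])
B = np.array([[0.9, 0.1],             # emission probabilities for symbols {0, 1}
              [0.3, 0.7]])
obs = [0, 1, 1, 0, 1, 1, 0, 0, 1, 1]  # observed sequence, length n = 10

def brute_force_marginal(obs):
    """Sum p(X, Z) over all 2**n latent paths Z -- exponential in n."""
    total = 0.0
    for path in itertools.product([0, 1], repeat=len(obs)):
        p = pi[path[0]] * B[path[0], obs[0]]
        for k in range(1, len(obs)):
            p *= A[path[k - 1], path[k]] * B[path[k], obs[k]]
        total += p
    return total

def forward_marginal(obs):
    """Forward algorithm: the same marginal in O(n * K**2) operations."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

print(brute_force_marginal(obs))  # 2**10 = 1024 terms summed explicitly
print(forward_marginal(obs))      # same value from the efficient recursion
```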

where the incomplete-data likelihood function is



[math]\displaystyle{ L(\theta;\mathbf{x}) =  \prod_{i=1}^n \sum_{j=1}^2 \tau_j \ f(\mathbf{x}_i;\boldsymbol{\mu}_j,\Sigma_j), }[/math]


The EM algorithm seeks to find the MLE of the marginal likelihood by iteratively applying these two steps:

Expectation step (E step): Define [math]\displaystyle{ Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) }[/math] as the expected value of the log likelihood function of [math]\displaystyle{ \boldsymbol\theta }[/math], with respect to the current conditional distribution of [math]\displaystyle{ \mathbf{Z} }[/math] given [math]\displaystyle{ \mathbf{X} }[/math] and the current estimates of the parameters [math]\displaystyle{ \boldsymbol\theta^{(t)} }[/math]:

and the complete-data likelihood function is


[math]\displaystyle{ Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) = \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}}\left[ \log L (\boldsymbol\theta; \mathbf{X},\mathbf{Z}) \right] \, }[/math]
[math]\displaystyle{ L(\theta;\mathbf{x},\mathbf{z}) = p(\mathbf{x},\mathbf{z} \mid \theta) = \prod_{i=1}^n  \prod_{j=1}^2  \ [f(\mathbf{x}_i;\boldsymbol{\mu}_j,\Sigma_j) \tau_j] ^{\mathbb{I}(z_i=j)}, }[/math]



Maximization step (M step): Find the parameters that maximize this quantity:

or

[math]\displaystyle{ \boldsymbol\theta^{(t+1)} = \underset{\boldsymbol\theta}{\operatorname{arg\,max}} \ Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) \, }[/math]


[math]\displaystyle{ L(\theta;\mathbf{x},\mathbf{z}) = \exp \left\{ \sum_{i=1}^n \sum_{j=1}^2 \mathbb{I}(z_i=j) \big[ \log \tau_j -\tfrac{1}{2} \log |\Sigma_j| -\tfrac{1}{2}(\mathbf{x}_i-\boldsymbol{\mu}_j)^\top\Sigma_j^{-1} (\mathbf{x}_i-\boldsymbol{\mu}_j) -\tfrac{d}{2} \log(2\pi) \big] \right\}, }[/math]


The typical models to which EM is applied use [math]\displaystyle{ \mathbf{Z} }[/math] as a latent variable indicating membership in one of a set of groups:

  1. The observed data points [math]\displaystyle{ \mathbf{X} }[/math] may be discrete (taking values in a finite or countably infinite set) or continuous (taking values in an uncountably infinite set). Associated with each data point may be a vector of observations.

where [math]\displaystyle{ \mathbb{I} }[/math] is an indicator function and [math]\displaystyle{ f }[/math] is the probability density function of a multivariate normal.


  2. The missing values (aka latent variables) [math]\displaystyle{ \mathbf{Z} }[/math] are discrete, drawn from a fixed number of values, and with one latent variable per observed unit.
  3. The parameters are continuous, and are of two kinds: parameters that are associated with all data points, and those associated with a specific value of a latent variable (i.e., associated with all data points whose corresponding latent variable has that value).

In the last equality, for each [math]\displaystyle{ i }[/math], one indicator [math]\displaystyle{ \mathbb{I}(z_i=j) }[/math] is equal to zero, and one indicator is equal to one. The inner sum thus reduces to one term.


However, it is possible to apply EM to other sorts of models.


The motive is as follows. If the value of the parameters [math]\displaystyle{ \boldsymbol\theta }[/math] is known, usually the value of the latent variables [math]\displaystyle{ \mathbf{Z} }[/math] can be found by maximizing the log-likelihood over all possible values of [math]\displaystyle{ \mathbf{Z} }[/math], either simply by iterating over [math]\displaystyle{ \mathbf{Z} }[/math] or through an algorithm such as the Baum–Welch algorithm for hidden Markov models. Conversely, if we know the value of the latent variables [math]\displaystyle{ \mathbf{Z} }[/math], we can find an estimate of the parameters [math]\displaystyle{ \boldsymbol\theta }[/math] fairly easily, typically by simply grouping the observed data points according to the value of the associated latent variable and averaging the values, or some function of the values, of the points in each group. This suggests an iterative algorithm, in the case where both [math]\displaystyle{ \boldsymbol\theta }[/math] and [math]\displaystyle{ \mathbf{Z} }[/math] are unknown:

  1. First, initialize the parameters [math]\displaystyle{ \boldsymbol\theta }[/math] to some random values.

Given our current estimate of the parameters [math]\displaystyle{ \theta^{(t)} }[/math], the conditional distribution of the [math]\displaystyle{ Z_i }[/math] is determined by Bayes' theorem to be the proportional height of the normal density weighted by [math]\displaystyle{ \tau }[/math]:


  2. Compute the probability of each possible value of [math]\displaystyle{ \mathbf{Z} }[/math], given [math]\displaystyle{ \boldsymbol\theta }[/math].
[math]\displaystyle{ T_{j,i}^{(t)} := \operatorname{P}(Z_i=j \mid X_i=\mathbf{x}_i ;\theta^{(t)}) = \frac{\tau_j^{(t)} \ f(\mathbf{x}_i;\boldsymbol{\mu}_j^{(t)},\Sigma_j^{(t)})}{\tau_1^{(t)} \ f(\mathbf{x}_i;\boldsymbol{\mu}_1^{(t)},\Sigma_1^{(t)}) + \tau_2^{(t)} \ f(\mathbf{x}_i;\boldsymbol{\mu}_2^{(t)},\Sigma_2^{(t)})}. }[/math]


  3. Then, use the just-computed values of [math]\displaystyle{ \mathbf{Z} }[/math] to compute a better estimate for the parameters [math]\displaystyle{ \boldsymbol\theta }[/math].
  4. Iterate steps 2 and 3 until convergence.

These are called the "membership probabilities", which are normally considered the output of the E step (although this is not the Q function defined below).
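A minimal sketch of this E step for the two-component case, computing [math]\displaystyle{ T_{j,i}^{(t)} }[/math] from the current parameter estimates, might look as follows; the data and the current parameter values are made up for illustration.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
# Made-up 2-dimensional data (n x d matrix)
x = np.vstack([rng.multivariate_normal([0, 0], np.eye(2), 40),
               rng.multivariate_normal([3, 3], np.eye(2), 60)])

# Current (not final) parameter estimates theta^(t)
tau = np.array([0.5, 0.5])                              # mixing weights
mu = [np.array([-1.0, 0.0]), np.array([2.0, 2.0])]      # component means
sigma = [np.eye(2), np.eye(2)]                          # component covariances

def e_step(x, tau, mu, sigma):
    """Return T with T[j, i] = P(Z_i = j+1 | X_i = x_i; theta^(t))."""
    weighted = np.vstack([tau[j] * multivariate_normal.pdf(x, mu[j], sigma[j])
                          for j in range(2)])           # numerators tau_j f(x_i; mu_j, Sigma_j)
    return weighted / weighted.sum(axis=0, keepdims=True)

T = e_step(x, tau, mu, sigma)
print(T.shape)            # (2, n)
print(T.sum(axis=0)[:5])  # each column sums to 1
```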


The algorithm as just described monotonically approaches a local minimum of the cost function.


This E step corresponds with setting up this function for Q:


[math]\displaystyle{ \begin{align}Q(\theta\mid\theta^{(t)})
&= \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\mathbf{\theta}^{(t)}} [\log L(\theta;\mathbf{x},\mathbf{Z}) ] \\
&= \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\mathbf{\theta}^{(t)}} [\log \prod_{i=1}^{n}L(\theta;\mathbf{x}_i,Z_i) ] \\
&= \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\mathbf{\theta}^{(t)}} [\sum_{i=1}^n \log L(\theta;\mathbf{x}_i,Z_i) ] \\
&= \sum_{i=1}^n\operatorname{E}_{Z_i\mid\mathbf{X};\mathbf{\theta}^{(t)}} [\log L(\theta;\mathbf{x}_i,Z_i) ] \\
&= \sum_{i=1}^n \sum_{j=1}^2 P(Z_i =j \mid X_i = \mathbf{x}_i; \theta^{(t)}) \log L(\theta_j;\mathbf{x}_i,j) \\
&= \sum_{i=1}^n \sum_{j=1}^2 T_{j,i}^{(t)} \big[ \log \tau_j  -\tfrac{1}{2} \log |\Sigma_j| -\tfrac{1}{2}(\mathbf{x}_i-\boldsymbol{\mu}_j)^\top\Sigma_j^{-1} (\mathbf{x}_i-\boldsymbol{\mu}_j) -\tfrac{d}{2} \log(2\pi) \big].
\end{align} }[/math]

The expectation of [math]\displaystyle{ \log L(\theta;\mathbf{x}_i,Z_i) }[/math] inside the sum is taken with respect to the probability density function [math]\displaystyle{ P(Z_i \mid X_i = \mathbf{x}_i; \theta^{(t)}) }[/math], which might be different for each [math]\displaystyle{ \mathbf{x}_i }[/math] of the training set. Everything in the E step is known before the step is taken except [math]\displaystyle{ T_{j,i} }[/math], which is computed according to the equation at the beginning of the E step section.

This full conditional expectation does not need to be calculated in one step, because τ and μ/Σ appear in separate linear terms and can thus be maximized independently.

Properties

Speaking of an expectation (E) step is a bit of a misnomer. What are calculated in the first step are the fixed, data-dependent parameters of the function Q. Once the parameters of Q are known, it is fully determined and is maximized in the second (M) step of an EM algorithm.

Although an EM iteration does increase the observed data (i.e., marginal) likelihood function, no guarantee exists that the sequence converges to a maximum likelihood estimator. For multimodal distributions, this means that an EM algorithm may converge to a local maximum of the observed data likelihood function, depending on starting values. A variety of heuristic or metaheuristic approaches exist to escape a local maximum, such as random-restart hill climbing (starting with several different random initial estimates [math]\displaystyle{ \theta^{(t)} }[/math]), or applying simulated annealing methods.

EM typically converges to a local optimum, not necessarily the global optimum, with no bound on the convergence rate in general. It can be arbitrarily poor in high dimensions, and there can be an exponential number of local optima. Hence, a need exists for alternative methods for guaranteed learning, especially in the high-dimensional setting. Alternatives to EM exist with better guarantees for consistency, termed moment-based approaches or so-called spectral techniques. Moment-based approaches to learning the parameters of a probabilistic model have attracted increasing interest recently, since under certain conditions they enjoy guarantees such as global convergence, unlike EM, which is often plagued by the issue of getting stuck in local optima. Algorithms with guarantees for learning can be derived for a number of important models such as mixture models, HMMs, etc. For these spectral methods, no spurious local optima occur, and the true parameters can be consistently estimated under some regularity conditions.

Proof of correctness

Expectation-maximization works to improve [math]\displaystyle{ Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) }[/math] rather than directly improving [math]\displaystyle{ \log p(\mathbf{X}\mid\boldsymbol\theta) }[/math]. Here is shown that improvements to the former imply improvements to the latter.[14]


For any [math]\displaystyle{ \mathbf{Z} }[/math] with non-zero probability [math]\displaystyle{ p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta) }[/math], we can write

[math]\displaystyle{ \log p(\mathbf{X}\mid\boldsymbol\theta) = \log p(\mathbf{X},\mathbf{Z}\mid\boldsymbol\theta) - \log p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta). }[/math]

We take the expectation over possible values of the unknown data [math]\displaystyle{ \mathbf{Z} }[/math] under the current parameter estimate [math]\displaystyle{ \theta^{(t)} }[/math] by multiplying both sides by [math]\displaystyle{ p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}) }[/math] and summing (or integrating) over [math]\displaystyle{ \mathbf{Z} }[/math]. The left-hand side is the expectation of a constant, so we get:

[math]\displaystyle{ \begin{align}
\log p(\mathbf{X}\mid\boldsymbol\theta) &= \sum_{\mathbf{Z}} p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}) \log p(\mathbf{X},\mathbf{Z}\mid\boldsymbol\theta) - \sum_{\mathbf{Z}} p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}) \log p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta) \\
&= Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) + H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}),
\end{align} }[/math]

where [math]\displaystyle{ H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) }[/math] is defined by the negated sum it is replacing. This last equation holds for every value of [math]\displaystyle{ \boldsymbol\theta }[/math] including [math]\displaystyle{ \boldsymbol\theta = \boldsymbol\theta^{(t)} }[/math],

[math]\displaystyle{ \log p(\mathbf{X}\mid\boldsymbol\theta^{(t)}) = Q(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}) + H(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}), }[/math]

and subtracting this last equation from the previous equation gives

[math]\displaystyle{ \log p(\mathbf{X}\mid\boldsymbol\theta) - \log p(\mathbf{X}\mid\boldsymbol\theta^{(t)}) = Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) - Q(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}) + H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) - H(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}). }[/math]

However, Gibbs' inequality tells us that [math]\displaystyle{ H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) \ge H(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}) }[/math], so we can conclude that

[math]\displaystyle{ \log p(\mathbf{X}\mid\boldsymbol\theta) - \log p(\mathbf{X}\mid\boldsymbol\theta^{(t)}) \ge Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) - Q(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}). }[/math]

In words, choosing [math]\displaystyle{ \boldsymbol\theta }[/math] to improve [math]\displaystyle{ Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) }[/math] causes [math]\displaystyle{ \log p(\mathbf{X}\mid\boldsymbol\theta) }[/math] to improve at least as much.

Returning to the Gaussian mixture example, [math]\displaystyle{ Q(\theta\mid\theta^{(t)}) }[/math] being quadratic in form means that determining the maximizing values of [math]\displaystyle{ \theta }[/math] is relatively straightforward. Also, [math]\displaystyle{ \tau }[/math], [math]\displaystyle{ (\boldsymbol{\mu}_1,\Sigma_1) }[/math] and [math]\displaystyle{ (\boldsymbol{\mu}_2,\Sigma_2) }[/math] may all be maximized independently since they all appear in separate linear terms.

To begin, consider [math]\displaystyle{ \tau }[/math], which has the constraint [math]\displaystyle{ \tau_1 + \tau_2 = 1 }[/math]:

[math]\displaystyle{ \begin{align}
\boldsymbol{\tau}^{(t+1)} &= \underset{\boldsymbol{\tau}} {\operatorname{arg\,max}}\ Q(\theta \mid \theta^{(t)} ) \\
&= \underset{\boldsymbol{\tau}} {\operatorname{arg\,max}} \ \left\{ \left[ \sum_{i=1}^n T_{1,i}^{(t)} \right] \log \tau_1 + \left[ \sum_{i=1}^n T_{2,i}^{(t)} \right] \log \tau_2 \right\}.
\end{align} }[/math]

This has the same form as the MLE for the binomial distribution, so

[math]\displaystyle{ \tau^{(t+1)}_j = \frac{\sum_{i=1}^n T_{j,i}^{(t)}}{\sum_{i=1}^n (T_{1,i}^{(t)} + T_{2,i}^{(t)} ) } = \frac{1}{n} \sum_{i=1}^n T_{j,i}^{(t)}. }[/math]

For the next estimates of [math]\displaystyle{ (\boldsymbol{\mu}_1,\Sigma_1) }[/math]:

[math]\displaystyle{ \begin{align}
(\boldsymbol{\mu}_1^{(t+1)},\Sigma_1^{(t+1)}) &= \underset{\boldsymbol{\mu}_1,\Sigma_1} {\operatorname{arg\,max}}\ Q(\theta \mid \theta^{(t)} ) \\
&= \underset{\boldsymbol{\mu}_1,\Sigma_1} {\operatorname{arg\,max}}\ \sum_{i=1}^n T_{1,i}^{(t)} \left\{ -\tfrac{1}{2} \log |\Sigma_1| -\tfrac{1}{2}(\mathbf{x}_i-\boldsymbol{\mu}_1)^\top\Sigma_1^{-1} (\mathbf{x}_i-\boldsymbol{\mu}_1) \right\}.
\end{align} }[/math]

This has the same form as a weighted MLE for a normal distribution, so

[math]\displaystyle{ \boldsymbol{\mu}_1^{(t+1)} = \frac{\sum_{i=1}^n T_{1,i}^{(t)} \mathbf{x}_i}{\sum_{i=1}^n T_{1,i}^{(t)}} }[/math] and [math]\displaystyle{ \Sigma_1^{(t+1)} = \frac{\sum_{i=1}^n T_{1,i}^{(t)} (\mathbf{x}_i - \boldsymbol{\mu}_1^{(t+1)}) (\mathbf{x}_i - \boldsymbol{\mu}_1^{(t+1)})^\top }{\sum_{i=1}^n T_{1,i}^{(t)}} }[/math]

and, by symmetry,

[math]\displaystyle{ \boldsymbol{\mu}_2^{(t+1)} = \frac{\sum_{i=1}^n T_{2,i}^{(t)} \mathbf{x}_i}{\sum_{i=1}^n T_{2,i}^{(t)}}  }[/math] and [math]\displaystyle{ \Sigma_2^{(t+1)} = \frac{\sum_{i=1}^n T_{2,i}^{(t)} (\mathbf{x}_i - \boldsymbol{\mu}_2^{(t+1)}) (\mathbf{x}_i - \boldsymbol{\mu}_2^{(t+1)})^\top }{\sum_{i=1}^n T_{2,i}^{(t)}}. }[/math]

Conclude the iterative process if [math]\displaystyle{ E_{Z\mid\theta^{(t)},\mathbf{x}}[\log L(\theta^{(t)};\mathbf{x},\mathbf{Z})] \leq E_{Z\mid\theta^{(t-1)},\mathbf{x}}[\log L(\theta^{(t-1)};\mathbf{x},\mathbf{Z})] + \varepsilon }[/math] for [math]\displaystyle{ \varepsilon }[/math] below some preset threshold.

The algorithm illustrated above can be generalized for mixtures of more than two multivariate normal distributions.

The EM algorithm has also been implemented in the case where an underlying linear regression model exists explaining the variation of some quantity, but where the values actually observed are censored or truncated versions of those represented in the model.
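Putting the membership probabilities [math]\displaystyle{ T_{j,i}^{(t)} }[/math] from the E step together with the closed-form M-step updates above gives the following minimal sketch of the two-component case. The synthetic data, the initial values, and the use of the observed-data log-likelihood as the stopping criterion (instead of the expected complete-data log-likelihood quoted above) are illustrative assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(7)
# Synthetic data from two 2-dimensional Gaussians (values made up for illustration)
x = np.vstack([rng.multivariate_normal([0, 0], [[1.0, 0.2], [0.2, 1.0]], 150),
               rng.multivariate_normal([4, 3], [[1.0, -0.3], [-0.3, 1.0]], 350)])
n, d = x.shape

tau = np.array([0.5, 0.5])
mu = [x[rng.integers(n)].copy(), x[rng.integers(n)].copy()]  # random data points as initial means
sigma = [np.eye(d), np.eye(d)]

prev_ll = -np.inf
for _ in range(200):
    # E step: membership probabilities T[j, i]
    weighted = np.vstack([tau[j] * multivariate_normal.pdf(x, mu[j], sigma[j])
                          for j in range(2)])
    T = weighted / weighted.sum(axis=0, keepdims=True)

    # M step: closed-form updates for tau_j, mu_j and Sigma_j
    for j in range(2):
        nj = T[j].sum()
        tau[j] = nj / n
        mu[j] = (T[j][:, None] * x).sum(axis=0) / nj
        diff = x - mu[j]
        sigma[j] = (T[j][:, None] * diff).T @ diff / nj

    # Stop when the observed-data log-likelihood no longer improves noticeably
    ll = np.log(weighted.sum(axis=0)).sum()
    if ll - prev_ll < 1e-6:
        break
    prev_ll = ll

print(np.round(tau, 2))
print(np.round(mu[0], 2), np.round(mu[1], 2))
```

Monitoring the observed-data log-likelihood is convenient here because, as shown in the proof of correctness above, an EM iteration never decreases it.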


As a maximization–maximization procedure


The EM algorithm can be viewed as two alternating maximization steps, that is, as an example of coordinate descent.[15][16] Consider the function:

[math]\displaystyle{ F(q,\theta) := \operatorname{E}_q [ \log L (\theta ; x,Z) ] + H(q), }[/math]

where q is an arbitrary probability distribution over the unobserved data z and H(q) is the entropy of the distribution q. This function can be written as

[math]\displaystyle{ F(q,\theta) = -D_{\mathrm{KL}}\big(q \parallel p_{Z\mid X}(\cdot\mid x;\theta ) \big) + \log L(\theta;x), }[/math]

where [math]\displaystyle{ p_{Z\mid X}(\cdot\mid x;\theta ) }[/math] is the conditional distribution of the unobserved data given the observed data [math]\displaystyle{ x }[/math] and [math]\displaystyle{ D_{KL} }[/math] is the Kullback–Leibler divergence.


Then the steps in the EM algorithm may be viewed as:

Expectation step: Choose [math]\displaystyle{ q }[/math] to maximize [math]\displaystyle{ F }[/math]:
[math]\displaystyle{ q^{(t)} = \operatorname{arg\,max}_q \ F(q,\theta^{(t)}) }[/math]
Maximization step: Choose [math]\displaystyle{ \theta }[/math] to maximize [math]\displaystyle{ F }[/math]:
[math]\displaystyle{ \theta^{(t+1)} = \operatorname{arg\,max}_\theta \ F(q^{(t)},\theta) }[/math]
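A small numerical check of this view, for a toy mixture of two known unit-variance Gaussians with mixing weight [math]\displaystyle{ \tau }[/math] (all numbers made up for illustration): [math]\displaystyle{ F(q,\theta) }[/math] never exceeds [math]\displaystyle{ \log L(\theta;x) }[/math], and the two coincide when [math]\displaystyle{ q }[/math] is the exact posterior over the latent labels, which is why choosing [math]\displaystyle{ q }[/math] in the E step closes the Kullback–Leibler gap.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Toy model: mixture of N(0, 1) and N(4, 1) with known mixing weight tau (illustrative values)
tau = 0.3
x = np.concatenate([rng.normal(0, 1, 30), rng.normal(4, 1, 70)])
log_f1 = norm.logpdf(x, 0, 1)     # log density under component 1
log_f2 = norm.logpdf(x, 4, 1)     # log density under component 2

def free_energy(q):
    """F(q, theta) = E_q[log L(theta; x, Z)] + H(q) for a factorized Bernoulli q."""
    e_complete = np.sum(q * (np.log(tau) + log_f1) + (1 - q) * (np.log(1 - tau) + log_f2))
    entropy = -np.sum(q * np.log(q) + (1 - q) * np.log(1 - q))
    return e_complete + entropy

# Observed-data log-likelihood log L(theta; x)
loglik = np.sum(np.logaddexp(np.log(tau) + log_f1, np.log(1 - tau) + log_f2))

# Exact posterior responsibilities p(Z_i = 1 | x_i; theta)
posterior = 1.0 / (1.0 + np.exp((np.log(1 - tau) + log_f2) - (np.log(tau) + log_f1)))

q_random = np.clip(rng.uniform(size=x.size), 1e-6, 1 - 1e-6)
print(free_energy(q_random) <= loglik)             # True: F lower-bounds log L for any q
print(np.isclose(free_energy(posterior), loglik))  # True: equality when q is the posterior
```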


Applications

EM is frequently used for data clustering in machine learning and computer vision. In natural language processing, two prominent instances of the algorithm are the Baum–Welch algorithm for hidden Markov models, and the inside-outside algorithm for unsupervised induction of probabilistic context-free grammars.


EM is frequently used for parameter estimation of mixed models,[17][18] notably in quantitative genetics.[19]



In psychometrics, EM is almost indispensable for estimating item parameters and latent abilities of item response theory models.


With the ability to deal with missing data and observe unidentified variables, EM is becoming a useful tool to price and manage risk of a portfolio.[citation needed]


The EM algorithm (and its faster variant ordered subset expectation maximization) is also widely used in medical image reconstruction, especially in positron emission tomography, single photon emission computed tomography, and x-ray computed tomography. See below for other faster variants of EM.


In structural engineering, the Structural Identification using Expectation Maximization (STRIDE)[20] algorithm is an output-only method for identifying natural vibration properties of a structural system using sensor data (see Operational Modal Analysis).



Filtering and smoothing EM algorithms

A Kalman filter is typically used for on-line state estimation and a minimum-variance smoother may be employed for off-line or batch state estimation. However, these minimum-variance solutions require estimates of the state-space model parameters. EM algorithms can be used for solving joint state and parameter estimation problems.


Filtering and smoothing EM algorithms arise by repeating this two-step procedure:


E-step
Operate a Kalman filter or a minimum-variance smoother designed with current parameter estimates to obtain updated state estimates.


M-step
Use the filtered or smoothed state estimates within maximum-likelihood calculations to obtain updated parameter estimates.


Suppose that a Kalman filter or minimum-variance smoother operates on measurements of a single-input-single-output system that possess additive white noise. An updated measurement noise variance estimate can be obtained from the maximum likelihood calculation

[math]\displaystyle{ \widehat{\sigma}^2_v = \frac{1}{N} \sum_{k=1}^N {(z_k-\widehat{x}_k)}^2, }[/math]


where [math]\displaystyle{ \widehat{x}_k }[/math] are scalar output estimates calculated by a filter or a smoother from N scalar measurements [math]\displaystyle{ z_k }[/math]. The above update can also be applied to updating a Poisson measurement noise intensity. Similarly, for a first-order auto-regressive process, an updated process noise variance estimate can be calculated by


[math]\displaystyle{ \widehat{\sigma}^2_w = \frac{1}{N} \sum_{k=1}^N {(\widehat{x}_{k+1}-\widehat{F}\widehat{{x}}_k)}^2, }[/math]




where [math]\displaystyle{ \widehat{x}_k }[/math] and [math]\displaystyle{ \widehat{x}_{k+1} }[/math] are scalar state estimates calculated by a filter or a smoother. The updated model coefficient estimate is obtained via


[math]\displaystyle{ \widehat{F} = \frac{\sum_{k=1}^N (\widehat{x}_{k+1}-\widehat{F} \widehat{x}_k)}{\sum_{k=1}^N \widehat{x}_k^2}. }[/math]
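A minimal sketch of this filtering-based iteration for a scalar state-space model [math]\displaystyle{ x_{k+1} = F x_k + w_k }[/math], [math]\displaystyle{ z_k = x_k + v_k }[/math] is given below. The state-transition coefficient [math]\displaystyle{ F }[/math] is taken as known so that only the two noise variances are re-estimated with the update formulas above; the simulated data and iteration count are illustrative assumptions.

```python
import numpy as np

def kalman_filter(z, F, q, r):
    """Standard scalar Kalman filter; returns the filtered state estimates."""
    x_hat = np.empty_like(z)
    x, p = 0.0, 1.0
    for k, zk in enumerate(z):
        x_pred, p_pred = F * x, F * F * p + q        # predict
        gain = p_pred / (p_pred + r)                 # Kalman gain
        x = x_pred + gain * (zk - x_pred)            # update with the measurement
        p = (1.0 - gain) * p_pred
        x_hat[k] = x
    return x_hat

# Simulated data (illustrative values): true sigma_w^2 = 0.2, sigma_v^2 = 0.5
rng = np.random.default_rng(3)
N, F = 2000, 0.9
x = np.zeros(N)
for k in range(1, N):
    x[k] = F * x[k - 1] + rng.normal(scale=np.sqrt(0.2))
z = x + rng.normal(scale=np.sqrt(0.5), size=N)

q_hat, r_hat = 1.0, 1.0                              # initial variance guesses
for _ in range(30):
    x_hat = kalman_filter(z, F, q_hat, r_hat)        # E-like step: run the filter
    r_hat = np.mean((z - x_hat) ** 2)                # measurement-noise variance update
    q_hat = np.mean((x_hat[1:] - F * x_hat[:-1]) ** 2)  # process-noise variance update

print(round(q_hat, 3), round(r_hat, 3))
```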





This page was moved from wikipedia:en:Expectation–maximization algorithm. Its edit history can be viewed at EM算法/edithistory

  1. Dempster, A.P.; Laird, N.M.; Rubin, D.B. (1977). "Maximum Likelihood from Incomplete Data via the EM Algorithm". Journal of the Royal Statistical Society, Series B. 39 (1): 1–38. JSTOR 2984875. MR 0501537.
  2. Ceppelini, R.M. (1955). "The estimation of gene frequencies in a random-mating population". Ann. Hum. Genet. 20 (2): 97–115. doi:10.1111/j.1469-1809.1955.tb01360.x. PMID 13268982. S2CID 38625779.
  3. Sundberg, Rolf (1974). "Maximum likelihood theory for incomplete data from an exponential family". Scandinavian Journal of Statistics. 1 (2): 49–58. JSTOR 4615553. MR 381110.
  4. Rolf Sundberg. 1971. Maximum likelihood theory and applications for distributions generated when observing a function of an exponential family variable. Dissertation, Institute for Mathematical Statistics, Stockholm University.
  5. Sundberg, Rolf (1976). "An iterative method for solution of the likelihood equations for incomplete data from exponential families". Communications in Statistics – Simulation and Computation. 5 (1): 55–64. doi:10.1080/03610917608812007. MR 443190.
  6. See the acknowledgement by Dempster, Laird and Rubin on pages 3, 5 and 11.
  7. G. Kulldorff. 1961. Contributions to the theory of estimation from grouped and partially grouped samples. Almqvist & Wiksell.
  8. Anders Martin-Löf. 1963. "Utvärdering av livslängder i subnanosekundsområdet" ("Evaluation of sub-nanosecond lifetimes"). ("Sundberg formula")
  9. Per Martin-Löf. 1966. Statistics from the point of view of statistical mechanics. Lecture notes, Mathematical Institute, Aarhus University. ("Sundberg formula" credited to Anders Martin-Löf).
  10. Per Martin-Löf. 1970. Statistika Modeller (Statistical Models): Anteckningar från seminarier läsåret 1969–1970 (Notes from seminars in the academic year 1969-1970), with the assistance of Rolf Sundberg. Stockholm University. ("Sundberg formula")
  11. Martin-Löf, P. The notion of redundancy and its use as a quantitative measure of the deviation between a statistical hypothesis and a set of observational data. With a discussion by F. Abildgård, A. P. Dempster, D. Basu, D. R. Cox, A. W. F. Edwards, D. A. Sprott, G. A. Barnard, O. Barndorff-Nielsen, J. D. Kalbfleisch and G. Rasch and a reply by the author. Proceedings of Conference on Foundational Questions in Statistical Inference (Aarhus, 1973), pp. 1–42. Memoirs, No. 1, Dept. Theoret. Statist., Inst. Math., Univ. Aarhus, Aarhus, 1974.
  12. Martin-Löf, Per (1974). "The notion of redundancy and its use as a quantitative measure of the discrepancy between a statistical hypothesis and a set of observational data". Scand. J. Statist. 1 (1): 3–18.
  13. Wu, C. F. Jeff (Mar 1983). "On the Convergence Properties of the EM Algorithm". Annals of Statistics. 11 (1): 95–103. doi:10.1214/aos/1176346060. JSTOR 2240463. MR 684867.
  14. Little, Roderick J.A.; Rubin, Donald B. (1987). Statistical Analysis with Missing Data. Wiley Series in Probability and Mathematical Statistics. New York: John Wiley & Sons. pp. 134–136. ISBN 978-0-471-80254-9. https://archive.org/details/statisticalanaly00litt. 
  15. Neal, Radford; Hinton, Geoffrey (1999). Michael I. Jordan. ed. A view of the EM algorithm that justifies incremental, sparse, and other variants. Cambridge, MA: MIT Press. pp. 355–368. ISBN 978-0-262-60032-3. ftp://ftp.cs.toronto.edu/pub/radford/emk.pdf. Retrieved 2009-03-22. 
  16. Hastie, Trevor; Tibshirani, Robert; Friedman, Jerome (2001). "8.5 The EM algorithm". The Elements of Statistical Learning. New York: Springer. pp. 236–243. ISBN 978-0-387-95284-0. https://archive.org/details/elementsstatisti00thas_842. 
  17. Lindstrom, Mary J; Bates, Douglas M (1988). "Newton—Raphson and EM Algorithms for Linear Mixed-Effects Models for Repeated-Measures Data". Journal of the American Statistical Association. 83 (404): 1014. doi:10.1080/01621459.1988.10478693.
  18. Van Dyk, David A (2000). "Fitting Mixed-Effects Models Using Efficient EM-Type Algorithms". Journal of Computational and Graphical Statistics. 9 (1): 78–98. doi:10.2307/1390614. JSTOR 1390614.
  19. Diffey, S. M; Smith, A. B; Welsh, A. H; Cullis, B. R (2017). "A new REML (parameter expanded) EM algorithm for linear mixed models". Australian & New Zealand Journal of Statistics. 59 (4): 433. doi:10.1111/anzs.12208.
  20. Matarazzo, T. J., and Pakzad, S. N. (2016). “STRIDE for Structural Identification using Expectation Maximization: Iterative Output-Only Method for Modal Identification.” Journal of Engineering Mechanics.http://ascelibrary.org/doi/abs/10.1061/(ASCE)EM.1943-7889.0000951