EM algorithm
In statistics, an expectation–maximization (EM) algorithm is an iterative method to find maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models, where the model depends on unobserved latent variables. The EM iteration alternates between performing an expectation (E) step, which creates a function for the expectation of the log-likelihood evaluated using the current estimate for the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found on the E step. These parameter-estimates are then used to determine the distribution of the latent variables in the next E step.
EM clustering of Old Faithful eruption data. The random initial model (which, due to the different scales of the axes, appears to be two very flat and wide spheres) is fit to the observed data. In the first iterations, the model changes substantially, but then converges to the two modes of the geyser. Visualized using ELKI.
History
The EM algorithm was explained and given its name in a classic 1977 paper by Arthur Dempster, Nan Laird, and Donald Rubin. They pointed out that the method had been "proposed many times in special circumstances" by earlier authors. One of the earliest is the gene-counting method for estimating allele frequencies by Cedric Smith.[1] A very detailed treatment of the EM method for exponential families was published by Rolf Sundberg in his thesis and several papers[2][3][4] following his collaboration with Per Martin-Löf and Anders Martin-Löf.[5][6][7][8][9][10][11] The Dempster–Laird–Rubin paper in 1977 generalized the method and sketched a convergence analysis for a wider class of problems. Regardless of earlier inventions, the innovative Dempster–Laird–Rubin paper in the Journal of the Royal Statistical Society received an enthusiastic discussion at the Royal Statistical Society meeting with Sundberg calling the paper "brilliant". The Dempster–Laird–Rubin paper established the EM method as an important tool of statistical analysis.
The convergence analysis of the Dempster–Laird–Rubin algorithm was flawed and a correct convergence analysis was published by C. F. Jeff Wu in 1983. Wu's proof established the EM method's convergence outside of the exponential family, as claimed by Dempster–Laird–Rubin.[12]
Introduction
The EM algorithm is used to find (local) maximum likelihood parameters of a statistical model in cases where the equations cannot be solved directly. Typically these models involve latent variables in addition to unknown parameters and known data observations. That is, either missing values exist among the data, or the model can be formulated more simply by assuming the existence of further unobserved data points. For example, a mixture model can be described more simply by assuming that each observed data point has a corresponding unobserved data point, or latent variable, specifying the mixture component to which each data point belongs.
Finding a maximum likelihood solution typically requires taking the derivatives of the likelihood function with respect to all the unknown values, the parameters and the latent variables, and simultaneously solving the resulting equations. In statistical models with latent variables, this is usually impossible. Instead, the result is typically a set of interlocking equations in which the solution to the parameters requires the values of the latent variables and vice versa, but substituting one set of equations into the other produces an unsolvable equation.
The EM algorithm proceeds from the observation that there is a way to solve these two sets of equations numerically. One can simply pick arbitrary values for one of the two sets of unknowns, use them to estimate the second set, then use these new values to find a better estimate of the first set, and then keep alternating between the two until the resulting values both converge to fixed points. It's not obvious that this will work, but it can be proven that in this context it does, and that the derivative of the likelihood is (arbitrarily close to) zero at that point, which in turn means that the point is either a maximum or a saddle point.[12] In general, multiple maxima may occur, with no guarantee that the global maximum will be found. Some likelihoods also have singularities in them, i.e., nonsensical maxima. For example, one of the solutions that may be found by EM in a mixture model involves setting one of the components to have zero variance and the mean parameter for the same component to be equal to one of the data points.
Description
Given the statistical model which generates a set [math]\displaystyle{ \mathbf{X} }[/math] of observed data, a set of unobserved latent data or missing values [math]\displaystyle{ \mathbf{Z} }[/math], and a vector of unknown parameters [math]\displaystyle{ \boldsymbol\theta }[/math], along with a likelihood function [math]\displaystyle{ L(\boldsymbol\theta; \mathbf{X}, \mathbf{Z}) = p(\mathbf{X}, \mathbf{Z}\mid\boldsymbol\theta) }[/math], the maximum likelihood estimate (MLE) of the unknown parameters is determined by maximizing the marginal likelihood of the observed data
- [math]\displaystyle{ L(\boldsymbol\theta; \mathbf{X}) = p(\mathbf{X}\mid\boldsymbol\theta) = \int p(\mathbf{X},\mathbf{Z} \mid \boldsymbol\theta) \, d\mathbf{Z} }[/math]
However, this quantity is often intractable (e.g. if [math]\displaystyle{ \mathbf{Z} }[/math] is a sequence of events, so that the number of values grows exponentially with the sequence length, the exact calculation of the sum will be extremely difficult).
The EM algorithm seeks to find the MLE of the marginal likelihood by iteratively applying these two steps:
- Expectation step (E step): Define [math]\displaystyle{ Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) }[/math] as the expected value of the log likelihood function of [math]\displaystyle{ \boldsymbol\theta }[/math], with respect to the current conditional distribution of [math]\displaystyle{ \mathbf{Z} }[/math] given [math]\displaystyle{ \mathbf{X} }[/math] and the current estimates of the parameters [math]\displaystyle{ \boldsymbol\theta^{(t)} }[/math]:
- [math]\displaystyle{ Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) = \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}}\left[ \log L (\boldsymbol\theta; \mathbf{X},\mathbf{Z}) \right] \, }[/math]
- Maximization step (M step): Find the parameters that maximize this quantity:
- [math]\displaystyle{ \boldsymbol\theta^{(t+1)} = \underset{\boldsymbol\theta}{\operatorname{arg\,max}} \ Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) \, }[/math]
The typical models to which EM is applied use [math]\displaystyle{ \mathbf{Z} }[/math] as a latent variable indicating membership in one of a set of groups:
- The observed data points [math]\displaystyle{ \mathbf{X} }[/math] may be discrete (taking values in a finite or countably infinite set) or continuous (taking values in an uncountably infinite set). Associated with each data point may be a vector of observations.
- The missing values (aka latent variables) [math]\displaystyle{ \mathbf{Z} }[/math] are discrete, drawn from a fixed number of values, and with one latent variable per observed unit.
- The parameters are continuous, and are of two kinds: Parameters that are associated with all data points, and those associated with a specific value of a latent variable (i.e., associated with all data points which corresponding latent variable has that value).
However, it is possible to apply EM to other sorts of models.
The motive is as follows. If the value of the parameters [math]\displaystyle{ \boldsymbol\theta }[/math] is known, usually the value of the latent variables [math]\displaystyle{ \mathbf{Z} }[/math] can be found by maximizing the log-likelihood over all possible values of [math]\displaystyle{ \mathbf{Z} }[/math], either simply by iterating over [math]\displaystyle{ \mathbf{Z} }[/math] or through an algorithm such as the Baum–Welch algorithm for hidden Markov models. Conversely, if we know the value of the latent variables [math]\displaystyle{ \mathbf{Z} }[/math], we can find an estimate of the parameters [math]\displaystyle{ \boldsymbol\theta }[/math] fairly easily, typically by simply grouping the observed data points according to the value of the associated latent variable and averaging the values, or some function of the values, of the points in each group. This suggests an iterative algorithm, in the case where both [math]\displaystyle{ \boldsymbol\theta }[/math] and [math]\displaystyle{ \mathbf{Z} }[/math] are unknown:
- First, initialize the parameters [math]\displaystyle{ \boldsymbol\theta }[/math] to some random values.
- Compute the probability of each possible value of [math]\displaystyle{ \mathbf{Z} }[/math], given [math]\displaystyle{ \boldsymbol\theta }[/math].
- Then, use the just-computed values of [math]\displaystyle{ \mathbf{Z} }[/math] to compute a better estimate for the parameters [math]\displaystyle{ \boldsymbol\theta }[/math].
- Iterate steps 2 and 3 until convergence.
The algorithm as just described monotonically approaches a local minimum of the cost function.
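To make this loop concrete, the following is a minimal Python sketch of the alternation just described. The functions e_step and m_step are hypothetical placeholders for the model-specific computations (the distribution over the latent variables and the improved parameter estimate); they are assumptions of the sketch, not library routines.

import numpy as np

def em(X, theta0, e_step, m_step, tol=1e-8, max_iter=1000):
    # e_step(X, theta) -> (statistics of Z given theta, current log-likelihood)
    # m_step(X, stats) -> new theta maximizing the expected log-likelihood
    theta, prev_ll = theta0, -np.inf
    for _ in range(max_iter):
        stats, ll = e_step(X, theta)  # step 2: probabilities of Z given current theta
        theta = m_step(X, stats)      # step 3: better estimate of theta from those values
        if abs(ll - prev_ll) < tol:   # step 4: stop once the likelihood has converged
            break
        prev_ll = ll
    return theta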
Properties
Speaking of an expectation (E) step is a bit of a misnomer. What are calculated in the first step are the fixed, data-dependent parameters of the function Q. Once the parameters of Q are known, it is fully determined and is maximized in the second (M) step of an EM algorithm.
Although an EM iteration does increase the observed data (i.e., marginal) likelihood function, no guarantee exists that the sequence converges to a maximum likelihood estimator. For multimodal distributions, this means that an EM algorithm may converge to a local maximum of the observed data likelihood function, depending on starting values. A variety of heuristic or metaheuristic approaches exist to escape a local maximum, such as random-restart hill climbing (starting with several different random initial estimates θ(t)), or applying simulated annealing methods.
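As an illustration of the random-restart strategy, one can wrap any EM routine, such as the em sketch above, and keep the fit with the highest observed-data likelihood; draw_theta0 and log_lik are hypothetical helpers the caller must supply.

def em_random_restarts(X, draw_theta0, log_lik, n_restarts=10, **em_kwargs):
    # draw_theta0() -> a fresh random initial estimate theta(0)
    # log_lik(X, theta) -> observed-data log-likelihood of a fitted theta
    best_theta, best_ll = None, -np.inf
    for _ in range(n_restarts):
        theta = em(X, draw_theta0(), **em_kwargs)  # uses the em() sketch above
        ll = log_lik(X, theta)
        if ll > best_ll:
            best_theta, best_ll = theta, ll
    return best_theta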
EM is especially useful when the likelihood is an exponential family: the E step becomes the sum of expectations of sufficient statistics, and the M step involves maximizing a linear function. In such a case, it is usually possible to derive closed-form expression updates for each step, using the Sundberg formula (published by Rolf Sundberg using unpublished results of Per Martin-Löf and Anders Martin-Löf).[3][4][7][8][9][10][11]
The EM method was modified to compute maximum a posteriori (MAP) estimates for Bayesian inference in the original paper by Dempster, Laird, and Rubin.
Other methods exist to find maximum likelihood estimates, such as gradient descent, conjugate gradient, or variants of the Gauss–Newton algorithm. Unlike EM, such methods typically require the evaluation of first and/or second derivatives of the likelihood function.
Proof of correctness
Expectation-maximization works to improve [math]\displaystyle{ Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) }[/math] rather than directly improving [math]\displaystyle{ \log p(\mathbf{X}\mid\boldsymbol\theta) }[/math]. Here is shown that improvements to the former imply improvements to the latter.[13]
For any [math]\displaystyle{ \mathbf{Z} }[/math] with non-zero probability [math]\displaystyle{ p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta) }[/math], we can write
- [math]\displaystyle{ \log p(\mathbf{X}\mid\boldsymbol\theta) = \log p(\mathbf{X},\mathbf{Z}\mid\boldsymbol\theta) - \log p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta) \,. }[/math]
We take the expectation over possible values of the unknown data [math]\displaystyle{ \mathbf{Z} }[/math] under the current parameter estimate [math]\displaystyle{ \theta^{(t)} }[/math] by multiplying both sides by [math]\displaystyle{ p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}) }[/math] and summing (or integrating) over [math]\displaystyle{ \mathbf{Z} }[/math]. The left-hand side is the expectation of a constant, so we get:
- [math]\displaystyle{ \begin{align}
\log p(\mathbf{X}\mid\boldsymbol\theta) &
= \sum_{\mathbf{Z}} p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}) \log p(\mathbf{X},\mathbf{Z}\mid\boldsymbol\theta)
- \sum_{\mathbf{Z}} p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta^{(t)}) \log p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta) \\
& = Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) + H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) \,,
\end{align} }[/math]
where [math]\displaystyle{ H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) }[/math] is defined by the negated sum it is replacing.
This last equation holds for every value of [math]\displaystyle{ \boldsymbol\theta }[/math] including [math]\displaystyle{ \boldsymbol\theta = \boldsymbol\theta^{(t)} }[/math],
- [math]\displaystyle{ \log p(\mathbf{X}\mid\boldsymbol\theta^{(t)})
= Q(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}) + H(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}) \,, }[/math]
and subtracting this last equation from the previous equation gives
- [math]\displaystyle{ \log p(\mathbf{X}\mid\boldsymbol\theta) - \log p(\mathbf{X}\mid\boldsymbol\theta^{(t)})
= Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) - Q(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)})
+ H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) - H(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}) \,, }[/math]
However, Jensen's inequality tells us that [math]\displaystyle{ H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) \ge H(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}) }[/math], so we can conclude that
- [math]\displaystyle{ \log p(\mathbf{X}\mid\boldsymbol\theta) - \log p(\mathbf{X}\mid\boldsymbol\theta^{(t)})
\ge Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) - Q(\boldsymbol\theta^{(t)}\mid\boldsymbol\theta^{(t)}) \,. }[/math]
In words, choosing [math]\displaystyle{ \boldsymbol\theta }[/math] to improve [math]\displaystyle{ Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) }[/math] causes [math]\displaystyle{ \log p(\mathbf{X}\mid\boldsymbol\theta) }[/math] to improve at least as much.
As a maximization–maximization procedure
The EM algorithm can be viewed as two alternating maximization steps, that is, as an example of coordinate ascent.[14][15] Consider the function:
- [math]\displaystyle{ F(q,\theta) := \operatorname{E}_q [ \log L (\theta ; x,Z) ] + H(q), }[/math]
where q is an arbitrary probability distribution over the unobserved data z and H(q) is the entropy of the distribution q. This function can be written as
- [math]\displaystyle{ F(q,\theta) = -D_{\mathrm{KL}}\big(q \parallel p_{Z\mid X}(\cdot\mid x;\theta ) \big) + \log L(\theta;x), }[/math]
where [math]\displaystyle{ p_{Z\mid X}(\cdot\mid x;\theta ) }[/math] is the conditional distribution of the unobserved data given the observed data [math]\displaystyle{ x }[/math] and [math]\displaystyle{ D_{KL} }[/math] is the Kullback–Leibler divergence.
Then the steps in the EM algorithm may be viewed as:
- Expectation step: Choose [math]\displaystyle{ q }[/math] to maximize [math]\displaystyle{ F }[/math]:
- [math]\displaystyle{ q^{(t)} = \operatorname{arg\,max}_q \ F(q,\theta^{(t)}) }[/math]
- Maximization step: Choose [math]\displaystyle{ \theta }[/math] to maximize [math]\displaystyle{ F }[/math]:
- [math]\displaystyle{ \theta^{(t+1)} = \operatorname{arg\,max}_\theta \ F(q^{(t)},\theta) }[/math]
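It may help to note (a standard observation, stated here for completeness rather than drawn from this article's sources) that the E step is solved exactly by the conditional distribution of the unobserved data, since that choice makes the Kullback–Leibler term vanish and F touch the log-likelihood:

- [math]\displaystyle{ q^{(t)} = p_{Z\mid X}(\cdot\mid x;\theta^{(t)}), \qquad F(q^{(t)},\theta^{(t)}) = \log L(\theta^{(t)};x). }[/math]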
Applications
EM is frequently used for data clustering in machine learning and computer vision. In natural language processing, two prominent instances of the algorithm are the Baum–Welch algorithm for hidden Markov models, and the inside-outside algorithm for unsupervised induction of probabilistic context-free grammars.
EM is frequently used for parameter estimation of mixed models,[16][17] notably in quantitative genetics.[18]
In psychometrics, EM is almost indispensable for estimating item parameters and latent abilities of item response theory models.
With the ability to deal with missing data and observe unidentified variables, EM is becoming a useful tool to price and manage risk of a portfolio.
The EM algorithm (and its faster variant ordered subset expectation maximization) is also widely used in medical image reconstruction, especially in positron emission tomography, single photon emission computed tomography, and x-ray computed tomography. See below for other faster variants of EM.
In structural engineering, the Structural Identification using Expectation Maximization (STRIDE) [19] algorithm is an output-only method for identifying natural vibration properties of a structural system using sensor data (see Operational Modal Analysis).
Filtering and smoothing EM algorithms
A Kalman filter is typically used for on-line state estimation and a minimum-variance smoother may be employed for off-line or batch state estimation. However, these minimum-variance solutions require estimates of the state-space model parameters. EM algorithms can be used for solving joint state and parameter estimation problems.
Filtering and smoothing EM algorithms arise by repeating this two-step procedure:
- E-step
- Operate a Kalman filter or a minimum-variance smoother designed with current parameter estimates to obtain updated state estimates.
- M-step
- Use the filtered or smoothed state estimates within maximum-likelihood calculations to obtain updated parameter estimates.
Suppose that a Kalman filter or minimum-variance smoother operates on measurements of a single-input-single-output system that possess additive white noise. An updated measurement noise variance estimate can be obtained from the maximum likelihood calculation
- [math]\displaystyle{ \widehat{\sigma}^2_v = \frac{1}{N} \sum_{k=1}^N {(z_k-\widehat{x}_k)}^2 }[/math]
where [math]\displaystyle{ \widehat{x}_k }[/math] are scalar output estimates calculated by a filter or a smoother from N scalar measurements [math]\displaystyle{ z_k }[/math]. The above update can also be applied to updating a Poisson measurement noise intensity. Similarly, for a first-order auto-regressive process, an updated process noise variance estimate can be calculated by
- [math]\displaystyle{ \widehat{\sigma}^2_w = \frac{1}{N} \sum_{k=1}^N {(\widehat{x}_{k+1}-\widehat{F}\widehat{x}_k)}^2 }[/math]
where [math]\displaystyle{ \widehat{x}_k }[/math] and [math]\displaystyle{ \widehat{x}_{k+1} }[/math] are scalar state estimates calculated by a filter or a smoother. The updated model coefficient estimate is obtained via
- [math]\displaystyle{ \widehat{F} = \frac{\sum_{k=1}^N (\widehat{x}_{k+1}-\widehat{F} \widehat{x}_k)}{\sum_{k=1}^N \widehat{x}_k^2}. }[/math]
The convergence of parameter estimates such as those above is well studied.[20][21][22][23][24][25]
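As a sketch of how these three updates fit into an M step, the Python snippet below computes them from the state estimates produced in the E step. It assumes xhat holds the filtered or smoothed scalar state estimates (one more entry than there are measurements z), and it reads the [math]\displaystyle{ \widehat{F} }[/math] appearing on the right-hand side of the last formula as the previous iterate; running the Kalman filter or smoother itself is outside the scope of the sketch.

import numpy as np

def m_step_updates(z, xhat, F_prev):
    # z:      N scalar measurements z_k
    # xhat:   N+1 scalar state estimates, so xhat[1:] holds xhat_{k+1}
    # F_prev: model coefficient estimate from the previous EM iteration
    sigma2_v = np.mean((z - xhat[:-1]) ** 2)        # measurement noise variance update
    resid = xhat[1:] - F_prev * xhat[:-1]           # residuals xhat_{k+1} - F * xhat_k
    sigma2_w = np.mean(resid ** 2)                  # process noise variance update
    F_new = np.sum(resid) / np.sum(xhat[:-1] ** 2)  # updated model coefficient
    return sigma2_v, sigma2_w, F_new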
Variants
A number of methods have been proposed to accelerate the sometimes slow convergence of the EM algorithm, such as those using conjugate gradient and modified Newton's methods (Newton–Raphson).[26] Also, EM can be used with constrained estimation methods.
Parameter-expanded expectation maximization (PX-EM) algorithm often provides speed up by "us[ing] a `covariance adjustment' to correct the analysis of the M step, capitalising on extra information captured in the imputed complete data".[27]
Expectation conditional maximization (ECM) replaces each M step with a sequence of conditional maximization (CM) steps in which each parameter θi is maximized individually, conditionally on the other parameters remaining fixed.[28] It can itself be extended into the Expectation conditional maximization either (ECME) algorithm.[29]
This idea is further extended in the generalized expectation maximization (GEM) algorithm, in which one seeks only an increase in the objective function F for both the E step and M step, as described in the As a maximization–maximization procedure section.[14] GEM is further developed in a distributed environment and shows promising results.
It is also possible to consider the EM algorithm as a subclass of the MM (Majorize/Minimize or Minorize/Maximize, depending on context) algorithm,[30] and therefore use any machinery developed in the more general case.
α-EM algorithm
The Q-function used in the EM algorithm is based on the log likelihood. Therefore, it is regarded as the log-EM algorithm. The use of the log likelihood can be generalized to that of the α-log likelihood ratio. Then, the α-log likelihood ratio of the observed data can be exactly expressed as equality by using the Q-function of the α-log likelihood ratio and the α-divergence. Obtaining this Q-function is a generalized E step. Its maximization is a generalized M step. This pair is called the α-EM algorithm, which contains the log-EM algorithm as its subclass. Thus, the α-EM algorithm by Yasuo Matsuyama is an exact generalization of the log-EM algorithm. No computation of gradient or Hessian matrix is needed. The α-EM shows faster convergence than the log-EM algorithm by choosing an appropriate α. The α-EM algorithm leads to a faster version of the hidden Markov model estimation algorithm α-HMM.
Relation to variational Bayes methods
EM is a partially non-Bayesian, maximum likelihood method. Its final result gives a probability distribution over the latent variables (in the Bayesian style) together with a point estimate for θ (either a maximum likelihood estimate or a posterior mode). A fully Bayesian version of this may be wanted, giving a probability distribution over θ and the latent variables. The Bayesian approach to inference is simply to treat θ as another latent variable. In this paradigm, the distinction between the E and M steps disappears. If using the factorized Q approximation as described above (variational Bayes), solving can iterate over each latent variable (now including θ) and optimize them one at a time. Now, k steps per iteration are needed, where k is the number of latent variables. For graphical models this is easy to do as each variable's new Q depends only on its Markov blanket, so local message passing can be used for efficient inference.
Geometric interpretation
In information geometry, the E step and the M step are interpreted as projections under dual affine connections, called the e-connection and the m-connection; the Kullback–Leibler divergence can also be understood in these terms.
Examples
Gaussian mixture
k-means and EM on artificial data visualized with ELKI. Using the variances, the EM algorithm can describe the normal distributions exactly, while k-means splits the data in Voronoi cells. The cluster center is indicated by the lighter, bigger symbol.
An animation demonstrating the EM algorithm fitting a two component Gaussian mixture model to the Old Faithful dataset. The algorithm steps through from a random initialization to convergence.
Let [math]\displaystyle{ \mathbf{x} = (\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_n) }[/math] be a sample of [math]\displaystyle{ n }[/math] independent observations from a mixture of two multivariate normal distributions of dimension [math]\displaystyle{ d }[/math], and let [math]\displaystyle{ \mathbf{z} = (z_1,z_2,\ldots,z_n) }[/math] be the latent variables that determine the component from which the observation originates.[15]
- [math]\displaystyle{ X_i \mid(Z_i = 1) \sim \mathcal{N}_d(\boldsymbol{\mu}_1,\Sigma_1) }[/math] and [math]\displaystyle{ X_i \mid(Z_i = 2) \sim \mathcal{N}_d(\boldsymbol{\mu}_2,\Sigma_2) }[/math]
where
- [math]\displaystyle{ \operatorname{P} (Z_i = 1 ) = \tau_1 \, }[/math] and [math]\displaystyle{ \operatorname{P} (Z_i=2) = \tau_2 = 1-\tau_1 }[/math]
The aim is to estimate the unknown parameters representing the mixing value between the Gaussians and the means and covariances of each:
- [math]\displaystyle{ \theta = \big( \boldsymbol{\tau},\boldsymbol{\mu}_1,\boldsymbol{\mu}_2,\Sigma_1,\Sigma_2 \big) }[/math]
where the incomplete-data likelihood function is
- [math]\displaystyle{ L(\theta;\mathbf{x}) = \prod_{i=1}^n \sum_{j=1}^2 \tau_j \ f(\mathbf{x}_i;\boldsymbol{\mu}_j,\Sigma_j) }[/math],
and the complete-data likelihood function is
- [math]\displaystyle{ L(\theta;\mathbf{x},\mathbf{z}) = p(\mathbf{x},\mathbf{z} \mid \theta) = \prod_{i=1}^n \prod_{j=1}^2 \ [f(\mathbf{x}_i;\boldsymbol{\mu}_j,\Sigma_j) \tau_j] ^{\mathbb{I}(z_i=j)} }[/math]
or
- [math]\displaystyle{ L(\theta;\mathbf{x},\mathbf{z}) = \exp \left\{ \sum_{i=1}^n \sum_{j=1}^2 \mathbb{I}(z_i=j) \big[ \log \tau_j -\tfrac{1}{2} \log |\Sigma_j| -\tfrac{1}{2}(\mathbf{x}_i-\boldsymbol{\mu}_j)^\top\Sigma_j^{-1} (\mathbf{x}_i-\boldsymbol{\mu}_j) -\tfrac{d}{2} \log(2\pi) \big] \right\}. }[/math]
where [math]\displaystyle{ \mathbb{I} }[/math] is an indicator function and [math]\displaystyle{ f }[/math] is the probability density function of a multivariate normal.
In the last equality, for each i, one indicator [math]\displaystyle{ \mathbb{I}(z_i=j) }[/math] is equal to zero, and one indicator is equal to one. The inner sum thus reduces to one term.
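To make the setup concrete, here is a minimal sketch, not from the original article, of how data from this two-component model could be simulated; the sample size, dimension, and all parameter values are illustrative assumptions.

```python
# Hypothetical simulation of the generative model: draw z_i from a
# categorical distribution with probabilities (tau_1, tau_2), then draw
# x_i | z_i = j from N(mu_j, Sigma_j). All parameter values are made up.
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 2                                    # sample size and dimension (illustrative)
tau = np.array([0.6, 0.4])                       # mixing proportions, tau_1 + tau_2 = 1
mus = np.array([[0.0, 0.0], [3.0, 3.0]])         # component means mu_1, mu_2
Sigmas = np.array([np.eye(d), 0.5 * np.eye(d)])  # component covariances Sigma_1, Sigma_2

z = rng.choice(2, size=n, p=tau)                 # latent component labels (0-based here)
x = np.array([rng.multivariate_normal(mus[j], Sigmas[j]) for j in z])
```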
E step
Given our current estimate of the parameters [math]\displaystyle{ \theta^{(t)} }[/math], the conditional distribution of the [math]\displaystyle{ Z_i }[/math] is determined by Bayes' theorem to be the proportional height of the normal density weighted by [math]\displaystyle{ \tau }[/math]:
- [math]\displaystyle{ T_{j,i}^{(t)} := \operatorname{P}(Z_i=j \mid X_i=\mathbf{x}_i ;\theta^{(t)}) = \frac{\tau_j^{(t)} \ f(\mathbf{x}_i;\boldsymbol{\mu}_j^{(t)},\Sigma_j^{(t)})}{\tau_1^{(t)} \ f(\mathbf{x}_i;\boldsymbol{\mu}_1^{(t)},\Sigma_1^{(t)}) + \tau_2^{(t)} \ f(\mathbf{x}_i;\boldsymbol{\mu}_2^{(t)},\Sigma_2^{(t)})} }[/math].
These are called the "membership probabilities", which are normally considered the output of the E step (although this is not the Q function defined below).
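As a sketch of how these membership probabilities might be computed in practice, assuming NumPy/SciPy and the array conventions of the previous snippet (`e_step` is a name introduced here, not from the article):

```python
# E step sketch: T[j, i] = tau_j f(x_i; mu_j, Sigma_j) / sum_k tau_k f(x_i; mu_k, Sigma_k)
import numpy as np
from scipy.stats import multivariate_normal

def e_step(x, tau, mus, Sigmas):
    """Return the membership probabilities T of shape (k, n)."""
    weighted = np.array([
        tau[j] * multivariate_normal.pdf(x, mean=mus[j], cov=Sigmas[j])
        for j in range(len(tau))
    ])                                  # numerators tau_j f(x_i; mu_j, Sigma_j)
    return weighted / weighted.sum(axis=0, keepdims=True)  # normalize per observation
```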
This E step corresponds to setting up the following function for Q:
- [math]\displaystyle{ \begin{align}Q(\theta\mid\theta^{(t)})
&= \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\mathbf{\theta}^{(t)}} [\log L(\theta;\mathbf{x},\mathbf{Z}) ] \\
&= \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\mathbf{\theta}^{(t)}} [\log \prod_{i=1}^{n}L(\theta;\mathbf{x}_i,Z_i) ] \\
&= \operatorname{E}_{\mathbf{Z}\mid\mathbf{X},\mathbf{\theta}^{(t)}} [\sum_{i=1}^n \log L(\theta;\mathbf{x}_i,Z_i) ] \\
&= \sum_{i=1}^n\operatorname{E}_{Z_i\mid\mathbf{X};\mathbf{\theta}^{(t)}} [\log L(\theta;\mathbf{x}_i,Z_i) ] \\
&= \sum_{i=1}^n \sum_{j=1}^2 P(Z_i =j \mid X_i = \mathbf{x}_i; \theta^{(t)}) \log L(\theta_j;\mathbf{x}_i,j) \\
&= \sum_{i=1}^n \sum_{j=1}^2 T_{j,i}^{(t)} \big[ \log \tau_j -\tfrac{1}{2} \log |\Sigma_j| -\tfrac{1}{2}(\mathbf{x}_i-\boldsymbol{\mu}_j)^\top\Sigma_j^{-1} (\mathbf{x}_i-\boldsymbol{\mu}_j) -\tfrac{d}{2} \log(2\pi) \big]
\end{align} }[/math]
The expectation of [math]\displaystyle{ \log L(\theta;\mathbf{x}_i,Z_i) }[/math] inside the sum is taken with respect to the probability density function [math]\displaystyle{ P(Z_i \mid X_i = \mathbf{x}_i; \theta^{(t)}) }[/math], which might be different for each [math]\displaystyle{ \mathbf{x}_i }[/math] of the training set. Everything in the E step is known before the step is taken except [math]\displaystyle{ T_{j,i} }[/math], which is computed according to the equation at the beginning of the E step section.
This full conditional expectation does not need to be calculated in one step, because τ and μ/Σ appear in separate linear terms and can thus be maximized independently.
M step
[math]\displaystyle{ Q(\theta \mid \theta^{(t)}) }[/math] being quadratic in form means that determining the maximizing values of [math]\displaystyle{ \theta }[/math] is relatively straightforward. Also, [math]\displaystyle{ \tau }[/math], [math]\displaystyle{ (\boldsymbol{\mu}_1,\Sigma_1) }[/math] and [math]\displaystyle{ (\boldsymbol{\mu}_2,\Sigma_2) }[/math] may all be maximized independently since they all appear in separate linear terms.
To begin, consider [math]\displaystyle{ \tau }[/math], which has the constraint [math]\displaystyle{ \tau_1 + \tau_2 = 1 }[/math]:
- [math]\displaystyle{ \begin{align}\boldsymbol{\tau}^{(t+1)}
&= \underset{\boldsymbol{\tau}} {\operatorname{arg\,max}}\ Q(\theta \mid \theta^{(t)} ) \\
&= \underset{\boldsymbol{\tau}} {\operatorname{arg\,max}} \ \left\{ \left[ \sum_{i=1}^n T_{1,i}^{(t)} \right] \log \tau_1 + \left[ \sum_{i=1}^n T_{2,i}^{(t)} \right] \log \tau_2 \right\}
\end{align} }[/math]
This has the same form as the MLE for the binomial distribution, so
- [math]\displaystyle{ \tau^{(t+1)}_j = \frac{\sum_{i=1}^n T_{j,i}^{(t)}}{\sum_{i=1}^n (T_{1,i}^{(t)} + T_{2,i}^{(t)} ) } = \frac{1}{n} \sum_{i=1}^n T_{j,i}^{(t)} }[/math].
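In code this update is a one-liner; a sketch, with `T` as produced by the hypothetical `e_step` above:

```python
import numpy as np

def update_tau(T):
    # Each column of T sums to one, so averaging the responsibilities of
    # component j over the n observations gives tau_j^(t+1).
    return T.mean(axis=1)
```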
For the next estimates of [math]\displaystyle{ (\boldsymbol{\mu}_1,\Sigma_1) }[/math]:
- [math]\displaystyle{ \begin{align}(\boldsymbol{\mu}_1^{(t+1)},\Sigma_1^{(t+1)})
&= \underset{\boldsymbol{\mu}_1,\Sigma_1} {\operatorname{arg\,max}}\ Q(\theta \mid \theta^{(t)} ) \\
&= \underset{\boldsymbol{\mu}_1,\Sigma_1} {\operatorname{arg\,max}}\ \sum_{i=1}^n T_{1,i}^{(t)} \left\{ -\tfrac{1}{2} \log |\Sigma_1| -\tfrac{1}{2}(\mathbf{x}_i-\boldsymbol{\mu}_1)^\top\Sigma_1^{-1} (\mathbf{x}_i-\boldsymbol{\mu}_1) \right\}
\end{align} }[/math].
This has the same form as a weighted MLE for a normal distribution, so
- [math]\displaystyle{ \boldsymbol{\mu}_1^{(t+1)} = \frac{\sum_{i=1}^n T_{1,i}^{(t)} \mathbf{x}_i}{\sum_{i=1}^n T_{1,i}^{(t)}} }[/math] and [math]\displaystyle{ \Sigma_1^{(t+1)} = \frac{\sum_{i=1}^n T_{1,i}^{(t)} (\mathbf{x}_i - \boldsymbol{\mu}_1^{(t+1)}) (\mathbf{x}_i - \boldsymbol{\mu}_1^{(t+1)})^\top }{\sum_{i=1}^n T_{1,i}^{(t)}} }[/math]
and, by symmetry
- [math]\displaystyle{ \boldsymbol{\mu}_2^{(t+1)} = \frac{\sum_{i=1}^n T_{2,i}^{(t)} \mathbf{x}_i}{\sum_{i=1}^n T_{2,i}^{(t)}} }[/math] and [math]\displaystyle{ \Sigma_2^{(t+1)} = \frac{\sum_{i=1}^n T_{2,i}^{(t)} (\mathbf{x}_i - \boldsymbol{\mu}_2^{(t+1)}) (\mathbf{x}_i - \boldsymbol{\mu}_2^{(t+1)})^\top }{\sum_{i=1}^n T_{2,i}^{(t)}} }[/math].
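A sketch of these weighted-MLE updates, again assuming the array conventions of the earlier snippets (`update_mu_sigma` is an illustrative name, not from the article):

```python
import numpy as np

def update_mu_sigma(x, T):
    """Weighted mean and covariance updates for each component.
    x has shape (n, d); T has shape (k, n)."""
    k = T.shape[0]
    mus, Sigmas = [], []
    for j in range(k):
        w = T[j]                                  # responsibilities T_{j,i}
        mu = w @ x / w.sum()                      # weighted sample mean
        c = x - mu                                # centered observations
        Sigma = (w[:, None] * c).T @ c / w.sum()  # weighted sample covariance
        mus.append(mu)
        Sigmas.append(Sigma)
    return np.array(mus), np.array(Sigmas)
```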
Termination
Conclude the iterative process if [math]\displaystyle{ E_{Z\mid\theta^{(t)},\mathbf{x}}[\log L(\theta^{(t)};\mathbf{x},\mathbf{Z})] \leq E_{Z\mid\theta^{(t-1)},\mathbf{x}}[\log L(\theta^{(t-1)};\mathbf{x},\mathbf{Z})] + \varepsilon }[/math] for some preset convergence threshold [math]\displaystyle{ \varepsilon }[/math].
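Putting the pieces together, a self-contained sketch of the whole iteration follows. One hedge: instead of the expected complete-data log-likelihood used in the stopping rule above, this sketch monitors the observed-data log-likelihood, a common practical surrogate; `em_two_gaussians`, the tolerance `eps`, and `max_iter` are all illustrative choices.

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_two_gaussians(x, tau, mus, Sigmas, eps=1e-6, max_iter=500):
    """Minimal EM loop for the two-component mixture above (a sketch)."""
    prev = -np.inf
    for _ in range(max_iter):
        # E step: membership probabilities T[j, i]
        weighted = np.array([
            tau[j] * multivariate_normal.pdf(x, mean=mus[j], cov=Sigmas[j])
            for j in range(2)
        ])
        T = weighted / weighted.sum(axis=0, keepdims=True)
        # Termination: observed-data log-likelihood at the current parameters
        cur = np.log(weighted.sum(axis=0)).sum()
        if cur - prev <= eps:        # improvement below the threshold: stop
            break
        prev = cur
        # M step: closed-form weighted-MLE updates
        tau = T.mean(axis=1)
        mus = np.array([T[j] @ x / T[j].sum() for j in range(2)])
        Sigmas = np.array([
            (T[j][:, None] * (x - mus[j])).T @ (x - mus[j]) / T[j].sum()
            for j in range(2)
        ])
    return tau, mus, Sigmas
```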
Generalization
The algorithm illustrated above can be generalized for mixtures of more than two multivariate normal distributions.
Truncated and censored regression
The EM algorithm has been implemented in the case where an underlying linear regression model exists explaining the variation of some quantity, but where the values actually observed are censored or truncated versions of those represented in the model.[31] Special cases of this model include censored or truncated observations from one normal distribution.[31]
Alternatives
EM typically converges to a local optimum, not necessarily the global optimum, and in general there is no bound on its rate of convergence. In high dimensions the solution can be arbitrarily poor, and the number of local optima can grow exponentially. Hence, a need exists for alternative methods with guaranteed learning, especially in the high-dimensional setting. Alternatives to EM exist with better guarantees for consistency, termed moment-based approaches[32] or the so-called spectral techniques[33][34][citation needed]. Moment-based approaches to learning the parameters of a probabilistic model have attracted increasing interest recently, since under certain conditions they enjoy guarantees such as global convergence, unlike EM, which often gets stuck in local optima. Algorithms with learning guarantees can be derived for a number of important models, such as mixture models and HMMs. For these spectral methods, no spurious local optima occur, and the true parameters can be consistently estimated under some regularity conditions[citation needed].
See also
- The EM algorithm can be viewed as a special case of the majorize-minimization (MM) algorithm.[35]
References
- ↑ Ceppelini, R.M. (1955). "The estimation of gene frequencies in a random-mating population". Ann. Hum. Genet. 20 (2): 97–115. doi:10.1111/j.1469-1809.1955.tb01360.x. PMID 13268982.
- ↑ 2.0 2.1 Sundberg, Rolf (1974). "Maximum likelihood theory for incomplete data from an exponential family". Scandinavian Journal of Statistics. 1 (2): 49–58.
- ↑ 3.0 3.1 3.2 Sundberg, Rolf (1971). Maximum likelihood theory and applications for distributions generated when observing a function of an exponential family variable. Dissertation, Institute for Mathematical Statistics, Stockholm University.
- ↑ 4.0 4.1 4.2 Sundberg, Rolf (1976). "An iterative method for solution of the likelihood equations for incomplete data from exponential families". Communications in Statistics – Simulation and Computation. 5 (1): 55–64.
- ↑ See the acknowledgement by Dempster, Laird and Rubin on pages 3, 5 and 11.
- ↑ G. Kulldorff. 1961. Contributions to the theory of estimation from grouped and partially grouped samples. Almqvist & Wiksell.
- ↑ 7.0 7.1 Anders Martin-Löf. 1963. "Utvärdering av livslängder i subnanosekundsområdet" ("Evaluation of sub-nanosecond lifetimes"). ("Sundberg formula")
- ↑ 8.0 8.1 Per Martin-Löf. 1966. Statistics from the point of view of statistical mechanics. Lecture notes, Mathematical Institute, Aarhus University. ("Sundberg formula" credited to Anders Martin-Löf).
- ↑ 9.0 9.1 Per Martin-Löf. 1970. Statistika Modeller (Statistical Models): Anteckningar från seminarier läsåret 1969–1970 (Notes from seminars in the academic year 1969-1970), with the assistance of Rolf Sundberg. Stockholm University. ("Sundberg formula")
- ↑ 10.0 10.1 Martin-Löf, P. The notion of redundancy and its use as a quantitative measure of the deviation between a statistical hypothesis and a set of observational data. With a discussion by F. Abildgård, A. P. Dempster, D. Basu, D. R. Cox, A. W. F. Edwards, D. A. Sprott, G. A. Barnard, O. Barndorff-Nielsen, J. D. Kalbfleisch and G. Rasch and a reply by the author. Proceedings of Conference on Foundational Questions in Statistical Inference (Aarhus, 1973), pp. 1–42. Memoirs, No. 1, Dept. Theoret. Statist., Inst. Math., Univ. Aarhus, Aarhus, 1974.
- ↑ 11.0 11.1 Martin-Löf, Per (1974). "The notion of redundancy and its use as a quantitative measure of the discrepancy between a statistical hypothesis and a set of observational data". Scand. J. Statist. 1 (1): 3–18.
- ↑ 12.0 12.1 Wu, C. F. Jeff (1983). "On the Convergence Properties of the EM Algorithm". Annals of Statistics. 11 (1): 95–103. doi:10.1214/aos/1176346060.
- ↑ Little, Roderick J.A.; Rubin, Donald B. (1987). Statistical Analysis with Missing Data. Wiley Series in Probability and Mathematical Statistics. New York: John Wiley & Sons. pp. 134–136. ISBN 978-0-471-80254-9.
- ↑ 14.0 14.1 Neal, Radford; Hinton, Geoffrey (1999). "A view of the EM algorithm that justifies incremental, sparse, and other variants". In Michael I. Jordan (ed.). Learning in Graphical Models. Cambridge, MA: MIT Press. pp. 355–368. ISBN 978-0-262-60032-3. ftp://ftp.cs.toronto.edu/pub/radford/emk.pdf. Retrieved 2009-03-22.
- ↑ 15.0 15.1 Hastie, Trevor; Tibshirani, Robert; Friedman, Jerome (2001). "8.5 The EM algorithm". The Elements of Statistical Learning. New York: Springer. pp. 236–243. ISBN 978-0-387-95284-0.
- ↑ Lindstrom, Mary J; Bates, Douglas M (1988). "Newton—Raphson and EM Algorithms for Linear Mixed-Effects Models for Repeated-Measures Data". Journal of the American Statistical Association. 83 (404): 1014. doi:10.1080/01621459.1988.10478693.
- ↑ Van Dyk, David A (2000). "Fitting Mixed-Effects Models Using Efficient EM-Type Algorithms". Journal of Computational and Graphical Statistics. 9 (1): 78–98. doi:10.2307/1390614. JSTOR 1390614.
- ↑ Diffey, S. M; Smith, A. B; Welsh, A. H; Cullis, B. R (2017). "A new REML (parameter expanded) EM algorithm for linear mixed models". Australian & New Zealand Journal of Statistics. 59 (4): 433. doi:10.1111/anzs.12208.
- ↑ Matarazzo, T. J., and Pakzad, S. N. (2016). "STRIDE for Structural Identification using Expectation Maximization: Iterative Output-Only Method for Modal Identification." Journal of Engineering Mechanics. http://ascelibrary.org/doi/abs/10.1061/(ASCE)EM.1943-7889.0000951
- ↑ Einicke, G.A.; Falco, G.; Malos, J.T. (May 2010). "EM Algorithm State Matrix Estimation for Navigation". IEEE Signal Processing Letters. 17 (5): 437–440. Bibcode:2010ISPL...17..437E. doi:10.1109/LSP.2010.2043151.
- ↑ Einicke, G.A.; Falco, G.; Dunn, M.T.; Reid, D.C. (May 2012). "Iterative Smoother-Based Variance Estimation". IEEE Signal Processing Letters. 19 (5): 275–278. Bibcode:2012ISPL...19..275E. doi:10.1109/LSP.2012.2190278.
- ↑ Einicke, G.A. (September 2015). "Iterative Filtering and Smoothing of Measurements Possessing Poisson Noise". IEEE Transactions on Aerospace and Electronic Systems. 51 (3): 2205–2211. Bibcode:2015ITAES..51.2205E. doi:10.1109/TAES.2015.140843.
- ↑ Jamshidian, Mortaza; Jennrich, Robert I. (1997). "Acceleration of the EM Algorithm by using Quasi-Newton Methods". Journal of the Royal Statistical Society, Series B. 59 (2): 569–587. doi:10.1111/1467-9868.00083. MR 1452026.
- ↑ Liu, C (1998). "Parameter expansion to accelerate EM: The PX-EM algorithm". Biometrika. 85 (4): 755–770. CiteSeerX 10.1.1.134.9617. doi:10.1093/biomet/85.4.755.
- ↑ Meng, Xiao-Li; Rubin, Donald B. (1993). "Maximum likelihood estimation via the ECM algorithm: A general framework". Biometrika. 80 (2): 267–278. doi:10.1093/biomet/80.2.267. MR 1243503.
- ↑ Liu, Chuanhai; Rubin, Donald B (1994). "The ECME Algorithm: A Simple Extension of EM and ECM with Faster Monotone Convergence". Biometrika. 81 (4): 633. doi:10.1093/biomet/81.4.633. JSTOR 2337067.
- ↑ Hunter, D.R.; Lange, K. (2004). "A Tutorial on MM Algorithms". The American Statistician. 58: 30–37.
- ↑ 31.0 31.1 Wolynetz, M.S. (1979). "Maximum likelihood estimation in a linear model from confined and censored normal data". Journal of the Royal Statistical Society, Series C. 28 (2): 195–206. doi:10.2307/2346749. JSTOR 2346749.
- ↑ Pearson, Karl (1894). "Contributions to the Mathematical Theory of Evolution". Philosophical Transactions of the Royal Society of London A. 185: 71–110. Bibcode:1894RSPTA.185...71P. doi:10.1098/rsta.1894.0003. ISSN 0264-3820. JSTOR 90667.
- ↑ Shaban, Amirreza; Mehrdad, Farajtabar; Bo, Xie; Le, Song; Byron, Boots (2015). "Learning Latent Variable Models by Improving Spectral Solutions with Exterior Point Method" (PDF). UAI: 792–801.
- ↑ Balle, Borja; Quattoni, Ariadna; Carreras, Xavier (2012-06-27). Local Loss Optimization in Operator Models: A New Insight into Spectral Learning. OCLC 815865081.
- ↑ Lange, Kenneth. "The MM Algorithm" (PDF).
Further reading
- Hogg, Robert; McKean, Joseph; Craig, Allen (2005). Introduction to Mathematical Statistics. Upper Saddle River, NJ: Pearson Prentice Hall. pp. 359–364.
- Dellaert, Frank (2002). "The Expectation Maximization Algorithm". CiteSeerX 10.1.1.9.9735. Gives an easier explanation of the EM algorithm in terms of lower-bound maximization.
- Bishop, Christopher M. (2006). Pattern Recognition and Machine Learning. Springer. ISBN 978-0-387-31073-2.
- Gupta, M. R.; Chen, Y. (2010). "Theory and Use of the EM Algorithm". Foundations and Trends in Signal Processing. 4 (3): 223–296. CiteSeerX 10.1.1.219.6830. doi:10.1561/2000000034. A well-written short book on EM, including detailed derivation of EM for GMMs, HMMs, and Dirichlet.
- Bilmes, Jeff (1998). "A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models". CiteSeerX 10.1.1.28.613. Includes a simplified derivation of the EM equations for Gaussian Mixtures and Gaussian Mixture Hidden Markov Models.
- McLachlan, Geoffrey J.; Krishnan, Thriyambakam (2008). The EM Algorithm and Extensions (2nd ed.). Hoboken: Wiley. ISBN 978-0-471-20170-0.
External links
- Various 1D, 2D and 3D demonstrations of EM together with Mixture Modeling are provided as part of the paired SOCR activities and applets. These applets and activities show empirically the properties of the EM algorithm for parameter estimation in diverse settings.
- Class hierarchy in C++ (GPL) including Gaussian Mixtures
- The on-line textbook: Information Theory, Inference, and Learning Algorithms, by David J.C. MacKay includes simple examples of the EM algorithm such as clustering using the soft k-means algorithm, and emphasizes the variational view of the EM algorithm, as described in Chapter 33.7 of version 7.2 (fourth edition).
- Variational Algorithms for Approximate Bayesian Inference, by M. J. Beal includes comparisons of EM to Variational Bayesian EM and derivations of several models including Variational Bayesian HMMs (chapters).
- The Expectation Maximization Algorithm: A short tutorial, A self-contained derivation of the EM Algorithm by Sean Borman.
- The EM Algorithm, by Xiaojin Zhu.
- EM algorithm and variants: an informal tutorial by Alexis Roche. A concise and very clear description of EM and many interesting variants.
Category:Estimation methods
Category:Machine learning algorithms
Category:Missing data
Category:Statistical algorithms
Category:Optimization algorithms and methods
Category:Cluster analysis algorithms
This page was moved from wikipedia:en:Expectation–maximization algorithm. Its edit history can be viewed at EM算法/edithistory