[[File:EM Clustering of Old Faithful data.gif|right|frame|EM clustering of [[Old Faithful]] eruption data. The random initial model (which, due to the different scales of the axes, appears as two very flat and wide regions) is fit to the observed data. In the first iterations, the model changes substantially, but then it converges to the two modes of the geyser. Visualized using ELKI.]]
==History==

The EM algorithm was explained and given its name in a classic 1977 paper by [[Arthur P. Dempster|Arthur Dempster]], [[Nan Laird]], and [[Donald Rubin]].<ref name="Dempster1977">
{{cite journal
They noted that the method had been "proposed many times in special circumstances" by earlier authors. One of the earliest is the gene-counting method for estimating allele frequencies, proposed by Cedric Smith. Building on his collaboration with Per Martin-Löf and Anders Martin-Löf, Rolf Sundberg gave a detailed treatment of the EM method for exponential families in his dissertation and several papers. The 1977 Dempster–Laird–Rubin paper generalized the method and sketched a convergence analysis for a wider class of problems, establishing EM as an important tool of statistical analysis.

The convergence analysis of Dempster–Laird–Rubin was flawed, and a corrected convergence analysis was published by C. F. Jeff Wu in 1983. Wu's work established the convergence of the EM method outside of the exponential family, as claimed by Dempster–Laird–Rubin.

== Introduction ==
The EM algorithm is used to find (local) [[maximum likelihood]] parameters of a [[statistical model]] in cases where the equations cannot be solved directly. Typically these models involve [[latent variable]]s in addition to unknown [[parameters]] and known data observations. That is, either [[missing values]] exist among the data, or the model can be formulated more simply by assuming the existence of further unobserved data points. For example, a [[mixture model]] can be described more simply by assuming that each observed data point has a corresponding unobserved data point, or latent variable, specifying the mixture component to which each data point belongs.

Finding a maximum likelihood solution typically requires taking the [[derivative]]s of the [[likelihood function]] with respect to all the unknown values, the parameters and the latent variables, and simultaneously solving the resulting equations. In statistical models with latent variables, this is usually impossible. Instead, the result is typically a set of interlocking equations in which the solution to the parameters requires the values of the latent variables and vice versa, but substituting one set of equations into the other produces an unsolvable equation.

The EM algorithm proceeds from the observation that there is a way to solve these two sets of equations numerically. One can simply pick arbitrary values for one of the two sets of unknowns, use them to estimate the second set, then use these new values to find a better estimate of the first set, and then keep alternating between the two until the resulting values both converge to fixed points. It's not obvious that this will work, but it can be proven that in this context it does, and that the derivative of the likelihood is (arbitrarily close to) zero at that point, which in turn means that the point is either a maximum or a [[saddle point]].<ref name="Wu" /> In general, multiple maxima may occur, with no guarantee that the global maximum will be found. Some likelihoods also have [[Mathematical singularity|singularities]] in them, i.e., nonsensical maxima. For example, one of the ''solutions'' that may be found by EM in a mixture model involves setting one of the components to have zero variance and the mean parameter for the same component to be equal to one of the data points.
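
To make this alternation concrete, the following minimal Python sketch fits a two-component univariate Gaussian mixture by EM; it is an illustration of the scheme just described, not a reference implementation, and all names in it are hypothetical:

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

def em_gmm_1d(x, n_iter=200, tol=1e-8):
    """Illustrative EM for a two-component 1-D Gaussian mixture."""
    # Arbitrary starting values for one set of unknowns (the parameters).
    w = np.array([0.5, 0.5])              # mixing weights
    mu = np.array([x.min(), x.max()])     # component means
    sigma = np.array([x.std(), x.std()])  # component standard deviations
    prev_ll = -np.inf
    for _ in range(n_iter):
        # E step: estimate the other set of unknowns, the per-point
        # component responsibilities, under the current parameters.
        dens = w * norm.pdf(x[:, None], mu, sigma)  # shape (n, 2)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M step: re-estimate the parameters from the weighted data.
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        # Keep alternating until the observed-data log-likelihood is fixed.
        # (No guard against the zero-variance singularities noted above.)
        ll = np.log(dens.sum(axis=1)).sum()
        if ll - prev_ll < tol:
            break
        prev_ll = ll
    return w, mu, sigma
</syntaxhighlight>

On bimodal data such as the Old Faithful eruption durations pictured above, the two fitted components typically converge to the two modes; with an unlucky initialization the procedure may instead stop at a poorer local maximum or drift toward a singular solution.
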
==Description==
Given the [[statistical model]] which generates a set <math>\mathbf{X}</math> of observed data, a set of unobserved latent data or [[missing values]] <math>\mathbf{Z}</math>, and a vector of unknown parameters <math>\boldsymbol\theta</math>, along with a [[likelihood function]] <math>L(\boldsymbol\theta; \mathbf{X}, \mathbf{Z}) = p(\mathbf{X}, \mathbf{Z}\mid\boldsymbol\theta)</math>, the [[maximum likelihood estimate]] (MLE) of the unknown parameters is determined by maximizing the [[marginal likelihood]] of the observed data
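
For continuous <math>\mathbf{Z}</math>, this marginal likelihood is obtained by integrating the joint likelihood over the latent data (for discrete <math>\mathbf{Z}</math>, a sum replaces the integral):

:<math>L(\boldsymbol\theta; \mathbf{X}) = p(\mathbf{X}\mid\boldsymbol\theta) = \int p(\mathbf{X},\mathbf{Z} \mid \boldsymbol\theta) \, d\mathbf{Z}</math>
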
With the ability to deal with missing data and observe unidentified variables, EM is becoming a useful tool to price and manage risk of a portfolio.

The EM algorithm seeks to find the maximum likelihood estimate of the marginal likelihood by iteratively applying these two steps:

:''Expectation step (E step)'': Define <math>Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)})</math> as the [[expected value]] of the log [[likelihood function]] of <math>\boldsymbol\theta</math>, with respect to the current [[conditional probability distribution|conditional distribution]] of <math>\mathbf{Z}</math> given <math>\mathbf{X}</math> and the current estimates of the parameters <math>\boldsymbol\theta^{(t)}</math>:
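::<math>Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) = \operatorname{E}_{\mathbf{Z}\sim p(\cdot\mid\mathbf{X},\boldsymbol\theta^{(t)})}\left[ \log L(\boldsymbol\theta; \mathbf{X}, \mathbf{Z}) \right] \,</math>
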
The EM algorithm (and its faster variant ordered subset expectation maximization) is also widely used in medical image reconstruction, especially in positron emission tomography, single photon emission computed tomography, and x-ray computed tomography. See below for other faster variants of EM.

In structural engineering, the Structural Identification using Expectation Maximization (STRIDE) algorithm is an output-only method for identifying natural vibration properties of a structural system using sensor data (see Operational Modal Analysis).

:''Maximization step (M step)'': Find the parameters that maximize this quantity:
::<math>\boldsymbol\theta^{(t+1)} = \underset{\boldsymbol\theta}{\operatorname{arg\,max}} \ Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)}) \, </math>

A Kalman filter is typically used for on-line state estimation and a minimum-variance smoother may be employed for off-line or batch state estimation. However, these minimum-variance solutions require estimates of the state-space model parameters. EM algorithms can be used for solving joint state and parameter estimation problems.

#The observed data points <math>\mathbf{X}</math> may be [[discrete random variable|discrete]] (taking values in a finite or countably infinite set) or [[continuous random variable|continuous]] (taking values in an uncountably infinite set). Associated with each data point may be a vector of observations.

E-step

The parameters are continuous, and of two kinds: parameters that are associated with all data points, and parameters that are associated with a specific value of a latent variable (i.e., associated with all data points whose corresponding latent variable has that value). However, it is possible to apply EM to other sorts of models.

Operate a Kalman filter or a minimum-variance smoother designed with current parameter estimates to obtain updated state estimates.

The algorithm as just described monotonically approaches a local minimum of the cost function.

<math>\widehat{\sigma}^2_v = \frac{1}{N} \sum_{k=1}^N {(z_k-\widehat{x}_k)}^2,</math>

== Properties ==
where <math>\widehat{x}_k</math> are scalar output estimates calculated by a filter or a smoother from N scalar measurements <math>z_k</math>. The above update can also be applied to updating a Poisson measurement noise intensity. Similarly, for a first-order auto-regressive process, an updated process noise variance estimate can be calculated by

Speaking of an expectation (E) step is a bit of a [[misnomer]]. What are calculated in the first step are the fixed, data-dependent parameters of the function ''Q''. Once the parameters of ''Q'' are known, it is fully determined and is maximized in the second (M) step of an EM algorithm.

<math>\widehat{\sigma}^2_w = \frac{1}{N} \sum_{k=1}^N {(\widehat{x}_{k+1}-\widehat{F}\widehat{x}_k)}^2,</math>

Although an EM iteration does increase the observed data (i.e., marginal) likelihood function, no guarantee exists that the sequence converges to a [[maximum likelihood estimator]]. For [[bimodal distribution|multimodal distributions]], this means that an EM algorithm may converge to a [[local maximum]] of the observed data likelihood function, depending on starting values. A variety of heuristic or [[metaheuristic]] approaches exist to escape a local maximum, such as random-restart [[hill climbing]] (starting with several different random initial estimates ''θ''<sup>(''t'')</sup>), or applying [[simulated annealing]] methods.
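
The following minimal Python sketch shows the random-restart heuristic; the helper <code>em_fit</code>, assumed to run one EM fit from a random initialization and return the estimate together with its log-likelihood, is a hypothetical stand-in for any concrete EM routine such as the mixture sketch above:

<syntaxhighlight lang="python">
import numpy as np

def best_of_restarts(x, em_fit, n_restarts=10, seed=0):
    """Run EM from several random initial estimates and keep the run
    that attains the highest observed-data log-likelihood."""
    rng = np.random.default_rng(seed)
    best_params, best_ll = None, -np.inf
    for _ in range(n_restarts):
        params, ll = em_fit(x, rng)  # one EM run from a random start
        if ll > best_ll:
            best_params, best_ll = params, ll
    return best_params, best_ll
</syntaxhighlight>
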
where <math>\widehat{x}_k</math> and <math>\widehat{x}_{k+1}</math> are scalar state estimates calculated by a filter or a smoother. The updated model coefficient estimate is obtained via

<math>\widehat{F} = \frac{\sum_{k=1}^N (\widehat{x}_{k+1}-\widehat{F}\,\widehat{x}_k)}{\sum_{k=1}^N \widehat{x}_k^2}.</math>
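
A minimal sketch of these three scalar updates, assuming the measurements and the filtered or smoothed state estimates are already available as NumPy arrays (all names hypothetical):

<syntaxhighlight lang="python">
import numpy as np

def update_noise_and_coefficient(z, x_hat, F_hat):
    """One pass of the scalar parameter updates above.
    z: N measurements; x_hat: N+1 state estimates from a filter or
    smoother; F_hat: the current model coefficient estimate."""
    N = len(z)
    # Measurement noise variance from the output residuals.
    sigma_v2 = np.mean((z - x_hat[:N]) ** 2)
    # Process noise variance from the one-step state residuals.
    sigma_w2 = np.mean((x_hat[1:N + 1] - F_hat * x_hat[:N]) ** 2)
    # Model coefficient update, following the formula above literally,
    # with the current estimate F_hat kept on the right-hand side.
    F_new = np.sum(x_hat[1:N + 1] - F_hat * x_hat[:N]) / np.sum(x_hat[:N] ** 2)
    return sigma_v2, sigma_w2, F_new
</syntaxhighlight>

Re-running the filter or smoother with these updated parameters and then updating again is the alternation described above.
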
EM is especially useful when the likelihood is an [[exponential family]]: the E step becomes the sum of expectations of [[sufficient statistic]]s, and the M step involves maximizing a linear function. In such a case, it is usually possible to derive [[closed-form expression]] updates for each step, using the Sundberg formula (published by Rolf Sundberg using unpublished results of [[Per Martin-Löf]] and [[Anders Martin-Löf]]).

The convergence of parameter estimates such as those above is well studied.

The EM method was modified to compute [[maximum a posteriori]] (MAP) estimates for [[Bayesian inference]] in the original paper by Dempster, Laird, and Rubin.

Other methods exist to find maximum likelihood estimates, such as [[gradient descent]], [[conjugate gradient]], or variants of the [[Gauss–Newton algorithm]]. Unlike EM, such methods typically require the evaluation of first and/or second derivatives of the likelihood function.

A number of methods have been proposed to accelerate the sometimes slow convergence of the EM algorithm, such as those using conjugate gradient and modified Newton's methods (Newton–Raphson). Also, EM can be used with constrained estimation methods.

== Proof of correctness ==
The parameter-expanded expectation maximization (PX-EM) algorithm often provides speed up by "using a 'covariance adjustment' to correct the analysis of the M step, capitalising on extra information captured in the imputed complete data".

Expectation conditional maximization (ECM) replaces each M step with a sequence of conditional maximization (CM) steps in which each parameter θ<sub>i</sub> is maximized individually, conditionally on the other parameters remaining fixed. This idea can be further extended to the expectation conditional maximization either (ECME) algorithm.
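
A schematic Python sketch of one such CM sweep; the callable <code>q</code>, standing for <math>Q(\boldsymbol\theta\mid\boldsymbol\theta^{(t)})</math>, is an assumed ingredient, and SciPy's <code>minimize_scalar</code> performs each one-parameter maximization:

<syntaxhighlight lang="python">
from scipy.optimize import minimize_scalar

def ecm_sweep(q, theta):
    """Replace the single M step by conditional maximizations:
    maximize q over each parameter in turn, holding the others fixed."""
    theta = list(theta)
    for i in range(len(theta)):
        # CM step for parameter i, conditional on the current others.
        res = minimize_scalar(lambda v: -q(theta[:i] + [v] + theta[i + 1:]))
        theta[i] = res.x
    return theta
</syntaxhighlight>
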
For any <math>\mathbf{Z}</math> with non-zero probability <math>p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta)</math>, we can write
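:<math>\log p(\mathbf{X}\mid\boldsymbol\theta) = \log p(\mathbf{X},\mathbf{Z}\mid\boldsymbol\theta) - \log p(\mathbf{Z}\mid\mathbf{X},\boldsymbol\theta). \,</math>
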
EM is a partially non-Bayesian, maximum likelihood method. Its final result gives a probability distribution over the latent variables (in the Bayesian style) together with a point estimate for θ (either a maximum likelihood estimate or a posterior mode). A fully Bayesian version of this may be wanted, giving a probability distribution over θ and the latent variables. The Bayesian approach to inference is simply to treat θ as another latent variable. In this paradigm, the distinction between the E and M steps disappears. If using the factorized Q approximation as described above (variational Bayes), solving can iterate over each latent variable (now including θ) and optimize them one at a time. Now, k steps per iteration are needed, where k is the number of latent variables. For graphical models this is easy to do as each variable's new Q depends only on its Markov blanket, so local message passing can be used for efficient inference.

where <math>H(\boldsymbol\theta\mid\boldsymbol\theta^{(t)})</math> is defined by the negated sum it is replacing.

== As a maximization–maximization procedure ==
The EM algorithm can be viewed as two alternating maximization steps, that is, as an example of [[coordinate descent]].<ref name="neal1999">{{cite book|last1=Neal |first=Radford |last2=Hinton |first2=Geoffrey |author-link2=Geoffrey Hinton |title=A view of the EM algorithm that justifies incremental, sparse, and other variants |journal=Learning in Graphical Models |editor=[[Michael I. Jordan]] |pages= 355–368 |publisher= MIT Press |location=Cambridge, MA |year=1999 |isbn=978-0-262-60032-3 |url=ftp://ftp.cs.toronto.edu/pub/radford/emk.pdf |accessdate=2009-03-22}}</ref><ref name="hastie2001">{{cite book|last1=Hastie |first1= Trevor|author-link1=Trevor Hastie|last2=Tibshirani|first2=Robert|author-link2=Robert Tibshirani|last3=Friedman|first3=Jerome |year=2001 |title=The Elements of Statistical Learning |url=https://archive.org/details/elementsstatisti00thas_842 |url-access=limited |isbn=978-0-387-95284-0 |publisher=Springer |location=New York |chapter=8.5 The EM algorithm |pages=[https://archive.org/details/elementsstatisti00thas_842/page/n237 236]–243}}</ref> Consider the function:
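:<math>F(q,\boldsymbol\theta) := \operatorname{E}_q [ \log L(\boldsymbol\theta; \mathbf{X}, \mathbf{Z}) ] + H(q),</math>
where ''q'' is an arbitrary probability distribution over the unobserved data <math>\mathbf{Z}</math> and <math>H(q)</math> is the [[entropy]] of the distribution ''q''.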