However, differential entropy does not have other desirable properties:
* It is not invariant under [[change of variables]], and is therefore most useful with dimensionless variables.
* It can be negative (see the example below).
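
For example, if <math>X</math> is uniformly distributed on <math>[0, 1/2]</math>, so that <math>f(x) = 2</math> on that interval, then

:<math>h(X) = -\int_0^{1/2} 2 \log 2 \, dx = -\log 2 < 0.</math>

More generally, under the rescaling <math>Y = aX</math> the differential entropy becomes <math>h(Y) = h(X) + \log|a|</math>, so its value depends on the units in which <math>X</math> is measured.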
 
Let <math>X</math> be an exponentially distributed random variable with parameter <math>\lambda</math>, that is, with probability density function

:<math>f(x) = \lambda e^{-\lambda x} \mbox{ for } x \geq 0.</math>
 
Its differential entropy is then

{|
| <math>h_e(X)\,</math>
| <math>= -\int_0^\infty \lambda e^{-\lambda x} \log\left(\lambda e^{-\lambda x}\right) dx</math>
|-
|
| <math>= -\log\lambda \int_0^\infty \lambda e^{-\lambda x} \, dx + \lambda \operatorname{E}[X]</math>
|-
|
| <math>= -\log\lambda + 1.</math>
|}

Here, <math>h_e(X)</math> was used rather than <math>h(X)</math> to make it explicit that the logarithm was taken to base e, to simplify the calculation.
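
Note that <math>h_e(X) = 1 - \log\lambda</math> is negative whenever <math>\lambda > e</math>, illustrating the point above that differential entropy can be negative.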
 
==Maximization in the normal distribution==

===Theorem===

With a [[normal distribution]], differential entropy is maximized for a given variance. A Gaussian random variable has the largest entropy amongst all random variables of equal variance, or, alternatively, the maximum entropy distribution under constraints of mean and variance is the Gaussian.<ref name="cover_thomas" />{{rp|255}}

===Proof===
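
Let <math>g(x)</math> be a Gaussian density with mean <math>\mu</math> and variance <math>\sigma^2</math>, and let <math>f(x)</math> be an arbitrary density with the same mean and variance. By the nonnegativity of the [[Kullback–Leibler divergence]],

:<math>0 \leq D_{\mathrm{KL}}(f \| g) = \int_{-\infty}^\infty f(x) \log\frac{f(x)}{g(x)} \, dx = -h(f) - \int_{-\infty}^\infty f(x) \log g(x) \, dx.</math>

Since <math>\log g(x)</math> is a quadratic polynomial in <math>x</math>, evaluating the second integral using the mean and variance of <math>f</math> gives

:<math>\int_{-\infty}^\infty f(x) \log g(x) \, dx = -\tfrac{1}{2}\log\left(2\pi e \sigma^2\right) = -h(g),</math>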
    
because the result does not depend on <math>f(x)</math> other than through the variance. Combining the two results yields

:<math>h(g) - h(f) \geq 0,</math>

with equality when <math>f(x) = g(x)</math>, following from the properties of the Kullback–Leibler divergence.

The differential entropy yields a lower bound on the expected squared error of an estimator. For any random variable <math>X</math> and estimator <math>\widehat{X}</math> the following holds:

:<math>\operatorname{E}\left[(X - \widehat{X})^2\right] \geq \frac{1}{2\pi e} e^{2h(X)},</math>
with equality if and only if <math>X</math> is a Gaussian random variable and <math>\widehat{X}</math> is the mean of <math>X</math>.
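
For example, if <math>X</math> is Gaussian with variance <math>\sigma^2</math>, then <math>h(X) = \tfrac{1}{2}\log\left(2\pi e \sigma^2\right)</math> (with the logarithm taken to base e), so the bound evaluates to <math>\frac{1}{2\pi e} e^{2h(X)} = \sigma^2</math>, exactly the mean squared error achieved by the estimator <math>\widehat{X} = \operatorname{E}[X]</math>.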
    
This result may also be demonstrated using the [[variational calculus]]. A Lagrangian function with two [[Lagrangian multiplier]]s may be defined as:
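
:<math>L = \int_{-\infty}^\infty g(x)\ln(g(x)) \, dx - \lambda_0\left(1 - \int_{-\infty}^\infty g(x) \, dx\right) - \lambda\left(\sigma^2 - \int_{-\infty}^\infty g(x)(x-\mu)^2 \, dx\right),</math>

where <math>g(x)</math> is a candidate density, the multiplier <math>\lambda_0</math> enforces normalization, and <math>\lambda</math> fixes the variance at <math>\sigma^2</math>; setting the variational derivative of <math>L</math> with respect to <math>g</math> to zero forces <math>g</math> to have Gaussian form.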