Changes

117 bytes removed, 15:36, 27 October 2020 (Tuesday)
No edit summary
Line 95:
With a normal distribution, differential entropy is maximized for a given variance.  A Gaussian random variable has the largest entropy amongst all random variables of equal variance, or, alternatively, the maximum entropy distribution under constraints of mean and variance is the Gaussian.
 
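This maximum-entropy property can be checked directly against the closed-form entropies of other common families. The sketch below (plain Python, entropies in nats; the value of <code>sigma</code> is an arbitrary example) rescales several distributions to a common variance and confirms that the Gaussian entry comes out largest:

<syntaxhighlight lang="python">
import math

sigma = 1.7  # common standard deviation; any positive value works

# Closed-form differential entropies (in nats) for distributions
# rescaled so that each has variance sigma**2.
entropies = {
    # Gaussian: h = (1/2) ln(2*pi*e*sigma^2)
    "gaussian": 0.5 * math.log(2 * math.pi * math.e * sigma**2),
    # Uniform on an interval of width w: h = ln(w), variance w^2/12 -> w = sqrt(12)*sigma
    "uniform": math.log(math.sqrt(12) * sigma),
    # Laplace with scale b: h = 1 + ln(2b), variance 2*b^2 -> b = sigma/sqrt(2)
    "laplace": 1 + math.log(2 * sigma / math.sqrt(2)),
    # Exponential with rate 1/sigma: h = 1 - ln(rate), variance sigma^2
    "exponential": 1 + math.log(sigma),
}

for name, h in sorted(entropies.items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} h = {h:.4f} nats")

# The Gaussian has the largest entropy among these equal-variance examples.
assert max(entropies, key=entropies.get) == "gaussian"
</syntaxhighlight>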
Line 105:
Let <math>g(x)</math> be a Gaussian PDF with mean <math>\mu</math> and variance <math>\sigma^2</math>, and let <math>f(x)</math> be an arbitrary PDF with the same variance. Since differential entropy is translation invariant, we can assume that <math>f(x)</math> has the same mean <math>\mu</math> as <math>g(x)</math>.
 
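The comparison between <math>f(x)</math> and <math>g(x)</math> runs through the [[Kullback–Leibler divergence]]. As a sketch of the standard starting point (logarithms natural, so entropies are in nats),

:<math>0 \le D_{KL}(f \| g) = \int_{-\infty}^{\infty} f(x) \log \frac{f(x)}{g(x)}\, dx = -h(f) - \int_{-\infty}^{\infty} f(x) \log g(x)\, dx.</math>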
    
Thus, differential entropy does not share all properties of discrete entropy.
 
Line 125:
Now note that
 
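The quantity in question is the cross term <math>\int f(x) \log g(x)\, dx</math>. As a sketch, substituting the Gaussian density for <math>g(x)</math> and using that <math>f(x)</math> has mean <math>\mu</math> and variance <math>\sigma^2</math>,

:<math>\int_{-\infty}^{\infty} f(x) \log g(x)\, dx = -\tfrac{1}{2}\log(2\pi\sigma^2) - \int_{-\infty}^{\infty} f(x)\,\frac{(x-\mu)^2}{2\sigma^2}\, dx = -\tfrac{1}{2}\log(2\pi\sigma^2) - \tfrac{1}{2} = -h(g),</math>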
    
Line 179:
because the result does not depend on <math>f(x)</math> other than through the variance.  Combining the two results yields
 
Line 191:
with equality when <math>f(x)=g(x)</math>, following from the properties of Kullback–Leibler divergence.
 
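Putting the pieces together, the sketched argument gives

:<math>h(g) - h(f) = D_{KL}(f \| g) \ge 0,</math>

so <math>h(f) \le h(g)</math> for every density <math>f</math> with the same variance, with equality precisely when <math>f = g</math> [[almost everywhere]].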
          
==Properties of differential entropy==
 
 
* For probability densities <math>f</math> and <math>g</math>, the [[Kullback–Leibler divergence]] <math>D_{KL}(f || g)</math> is greater than or equal to 0 with equality only if <math>f=g</math> [[almost everywhere]]. Similarly, for two random variables <math>X</math> and <math>Y</math>, <math>I(X;Y) \ge 0</math> and <math>h(X|Y) \le h(X)</math> with equality [[if and only if]] <math>X</math> and <math>Y</math> are [[Statistical independence|independent]].
 
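For jointly Gaussian variables all of these quantities have closed forms, which makes the inequalities easy to sanity-check. A minimal sketch (plain Python; the means, standard deviations, and correlation below are arbitrary example values):

<syntaxhighlight lang="python">
import math

def kl_gauss(mu0, s0, mu1, s1):
    """Closed-form KL divergence D( N(mu0, s0^2) || N(mu1, s1^2) ) in nats."""
    return math.log(s1 / s0) + (s0**2 + (mu0 - mu1) ** 2) / (2 * s1**2) - 0.5

# D_KL(f || g) >= 0, with equality only when the two densities coincide.
assert kl_gauss(0.0, 1.0, 0.3, 1.5) > 0
assert abs(kl_gauss(0.2, 0.7, 0.2, 0.7)) < 1e-12

def h_gauss(var):
    """Differential entropy of a Gaussian with the given variance, in nats."""
    return 0.5 * math.log(2 * math.pi * math.e * var)

# For (X, Y) jointly Gaussian with correlation rho:
#   h(X)   = (1/2) log(2*pi*e*var_x)
#   h(X|Y) = (1/2) log(2*pi*e*var_x*(1 - rho**2)) <= h(X)
#   I(X;Y) = h(X) - h(X|Y) = -(1/2) log(1 - rho**2) >= 0
var_x, rho = 2.0, 0.6
hx = h_gauss(var_x)
hx_given_y = h_gauss(var_x * (1 - rho**2))
mi = hx - hx_given_y

assert hx_given_y <= hx and mi >= 0
print(f"h(X) = {hx:.4f}, h(X|Y) = {hx_given_y:.4f}, I(X;Y) = {mi:.4f} nats")
</syntaxhighlight>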
  