{{Short description|Concept in information theory}}

{{Information theory}}

'''Differential entropy''' (also referred to as '''continuous entropy''') is a concept in [[information theory]] that began as an attempt by Shannon to extend the idea of (Shannon) [[information entropy|entropy]], a measure of average [[surprisal]] of a [[random variable]], to continuous [[probability distribution]]s. Unfortunately, Shannon did not derive this formula, and rather just assumed it was the correct continuous analogue of discrete entropy, but it is not.<ref>{{cite journal |author=Jaynes, E.T. |authorlink=Edwin Thompson Jaynes |title=Information Theory And Statistical Mechanics |journal=Brandeis University Summer Institute Lectures in Theoretical Physics |volume=3 |issue=sect. 4b |year=1963 |url=http://bayes.wustl.edu/etj/articles/brandeis.pdf |format=PDF}}</ref>{{rp|181–218}} The actual continuous version of discrete entropy is the  [[limiting density of discrete points]] (LDDP). Differential entropy (described here) is commonly encountered in the literature, but it is a limiting case of the LDDP, and one that loses its fundamental association with discrete [[information entropy|entropy]].

==Definition==
      
Let <math>X</math> be a random variable with a [[probability density function]] <math>f</math> whose [[support (mathematics)|support]] is a set <math>\mathcal X</math>. The ''differential entropy'' <math>h(X)</math> or <math>h(f)</math> is defined as<ref name="cover_thomas">{{cite book|first1=Thomas M.|first2=Joy A.|last1=Cover|last2=Thomas|isbn=0-471-06259-6|title=Elements of Information Theory|year=1991|publisher=Wiley|location=New York|url=https://archive.org/details/elementsofinform0000cove|url-access=registration}}</ref>{{rp|243}}
 
{{Equation box 1
|indent =
|title=
|equation = <math>h(X) = -\int_\mathcal{X} f(x)\log f(x)\,dx</math>
|cellpadding= 6
|border
|border colour = #0073CF
|background colour=#F5FFFA}}
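
For a concrete sense of the definition, the integral can be evaluated numerically for a given density. The sketch below is an illustrative addition (not drawn from the cited sources): it approximates <math>h(X)</math> in nats for a normal density by quadrature and compares the result with the closed form <math>\tfrac{1}{2}\ln(2\pi e\sigma^2)</math> listed later in the article.

<syntaxhighlight lang="python">
# Approximate h(X) = -∫ f(x) ln f(x) dx (in nats) by quadrature
# and compare with the closed form for a normal density.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def entropy_nats(pdf, lo, hi):
    """Numerically approximate -∫ pdf(x) ln pdf(x) dx over [lo, hi]."""
    value, _ = quad(lambda x: -pdf(x) * np.log(pdf(x)), lo, hi)
    return value

sigma = 1.0
h_numeric = entropy_nats(norm(scale=sigma).pdf, -10, 10)
h_closed = 0.5 * np.log(2 * np.pi * np.e * sigma**2)
print(h_numeric, h_closed)  # both ≈ 1.4189
</syntaxhighlight>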
For probability distributions which don't have an explicit density function expression, but have an explicit [[quantile function]] expression, <math>Q(p)</math>, then <math>h(Q)</math> can be defined in terms of the derivative of <math>Q(p)</math> i.e. the quantile density function <math>Q'(p)</math> as <ref>{{Citation |last1=Vasicek  |first1=Oldrich |year=1976 |title=A Test for Normality Based on Sample Entropy |journal=[[Journal of the Royal Statistical Society, Series B]] |volume=38 |issue=1 |jstor=2984828 |postscript=. }}</ref>{{rp|54–59}}
 
:<math>h(Q) = \int_0^1 \log Q'(p)\,dp</math>.
 
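
As an illustration (not from the cited source), for an exponential distribution with rate <math>\lambda</math> the quantile function is <math>Q(p) = -\ln(1-p)/\lambda</math>, so <math>Q'(p) = 1/(\lambda(1-p))</math>, and the integral above evaluates to <math>1 - \ln\lambda</math>, matching the exponential example later in the article. A minimal numerical sketch:

<syntaxhighlight lang="python">
# h(Q) = ∫₀¹ ln Q'(p) dp for the exponential distribution, whose
# quantile density is Q'(p) = 1/(lam*(1-p)); the rate lam is arbitrary.
import numpy as np
from scipy.integrate import quad

lam = 2.0
h, _ = quad(lambda p: np.log(1.0 / (lam * (1.0 - p))), 0, 1)
print(h, 1 - np.log(lam))  # both ≈ 0.3069
</syntaxhighlight>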
          
As with its discrete analog, the units of differential entropy depend on the base of the [[logarithm]], which is usually 2 (i.e., the units are [[bit]]s). See [[logarithmic units]] for logarithms taken in different bases. Related concepts such as [[joint entropy|joint]], [[conditional entropy|conditional]] differential entropy, and [[Kullback–Leibler divergence|relative entropy]] are defined in a similar fashion. Unlike the discrete analog, the differential entropy has an offset that depends on the units used to measure <math>X</math>.<ref name="gibbs">{{cite book |last=Gibbs |first=Josiah Willard |authorlink=Josiah Willard Gibbs |title=[[Elementary Principles in Statistical Mechanics|Elementary Principles in Statistical Mechanics, developed with especial reference to the rational foundation of thermodynamics]] |year=1902 |publisher=Charles Scribner's Sons |location=New York}}</ref>{{rp|183–184}} For example, the differential entropy of a quantity measured in millimeters will be {{not a typo|log(1000)}} more than the same quantity measured in meters; a dimensionless quantity will have differential entropy of {{not a typo|log(1000)}} more than the same quantity divided by 1000.
 
One must take care in trying to apply properties of discrete entropy to differential entropy, since probability density functions can be greater than 1. For example, the [[Uniform distribution (continuous)|uniform distribution]] <math>\mathcal{U}(0,1/2)</math> has ''negative'' differential entropy
 
:<math>\int_0^\frac{1}{2} -2\log(2)\,dx=-\log(2)\,</math>.
 
Thus, differential entropy does not share all properties of discrete entropy.
 
Note that the continuous [[mutual information]] <math>I(X;Y)</math> has the distinction of retaining its fundamental significance as a measure of discrete information since it is actually the limit of the discrete mutual information of ''partitions'' of <math>X</math> and <math>Y</math> as these partitions become finer and finer.  Thus it is invariant under non-linear [[homeomorphisms]] (continuous and uniquely invertible maps), <ref>{{cite journal
 
  | first = Alexander
  | last = Kraskov
  |author2=Stögbauer, Grassberger
  | year = 2004
  | title = Estimating mutual information
  | journal = [[Physical Review E]]
  | volume = 69
  | pages = 066138
  | doi =10.1103/PhysRevE.69.066138
|arxiv = cond-mat/0305641 |bibcode = 2004PhRvE..69f6138K }}</ref> including linear <ref name = Reza>{{ cite book | title = An Introduction to Information Theory | author = Fazlollah M. Reza | publisher = Dover Publications, Inc., New York | origyear = 1961| year = 1994 | isbn = 0-486-68210-2 | url = https://books.google.com/books?id=RtzpRAiX6OgC&pg=PA8&dq=intitle:%22An+Introduction+to+Information+Theory%22++%22entropy+of+a+simple+source%22&as_brr=0&ei=zP79Ro7UBovqoQK4g_nCCw&sig=j3lPgyYrC3-bvn1Td42TZgTzj0Q }}</ref> transformations of <math>X</math> and <math>Y</math>, and still represents the amount of discrete information that can be transmitted over a channel that admits a continuous space of values.
 
For the direct analogue of discrete entropy extended to the continuous space, see  [[limiting density of discrete points]].
 
==Properties of differential entropy==
 
* For probability densities <math>f</math> and <math>g</math>, the [[Kullback–Leibler divergence]] <math>D_{KL}(f || g)</math> is greater than or equal to 0 with equality only if <math>f=g</math> [[almost everywhere]]. Similarly, for two random variables <math>X</math> and <math>Y</math>, <math>I(X;Y) \ge 0</math> and <math>h(X|Y) \le h(X)</math> with equality [[if and only if]] <math>X</math> and <math>Y</math> are [[Statistical independence|independent]].
 
* The chain rule for differential entropy holds as in the discrete case<ref name="cover_thomas" />{{rp|253}}
 
::<math>h(X_1, \ldots, X_n) = \sum_{i=1}^{n} h(X_i|X_1, \ldots, X_{i-1}) \leq \sum_{i=1}^{n} h(X_i)</math>.
 
* Differential entropy is translation invariant, i.e. for a constant <math>c</math>.<ref name="cover_thomas" />{{rp|253}}
 
::<math>h(X+c) = h(X)</math>
 
* Differential entropy is in general not invariant under arbitrary invertible maps.
 
:: In particular, for a constant <math>a</math>
 
:::<math>h(aX) = h(X)+ \log |a|</math>
 
:: For a vector valued random variable <math>\mathbf{X}</math> and an invertible (square) [[matrix (mathematics)|matrix]] <math>\mathbf{A}</math>
 
:::<math>h(\mathbf{A}\mathbf{X})=h(\mathbf{X})+\log \left( |\det \mathbf{A}| \right)</math><ref name="cover_thomas" />{{rp|253}}
 
* In general, for a transformation from a random vector to another random vector with same dimension <math>\mathbf{Y}=m \left(\mathbf{X}\right)</math>, the corresponding entropies are related via
 
::<math>h(\mathbf{Y}) \leq h(\mathbf{X}) + \int f(x) \log \left\vert \frac{\partial m}{\partial x} \right\vert dx</math>
 
:where <math>\left\vert \frac{\partial m}{\partial x} \right\vert</math> is the [[Jacobian matrix and determinant|Jacobian]] of the transformation <math>m</math>.<ref>{{cite web |title=proof of upper bound on differential entropy of f(X) |work=[[Stack Exchange]] |date=April 16, 2016 |url=https://math.stackexchange.com/q/1745670 }}</ref> The above inequality becomes an equality if the transform is a bijection. Furthermore, when <math>m</math> is a rigid rotation, translation, or combination thereof, the Jacobian determinant is always 1, and <math>h(Y)=h(X)</math>.
 
* If a random vector <math>X \in \mathbb{R}^n</math> has mean zero and [[covariance]] matrix <math>K</math>, <math>h(\mathbf{X}) \leq \frac{1}{2} \log(\det{2 \pi e K}) = \frac{1}{2} \log[(2\pi e)^n \det{K}]</math> with equality if and only if <math>X</math> is [[Multivariate normal distribution#Joint normality|jointly gaussian]] (see [[#Maximization in the normal distribution|below]]).<ref name="cover_thomas" />{{rp|254}}
 
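
The translation and scaling properties above can be checked numerically on a normal variate, for which scaling by <math>a</math> multiplies the standard deviation by <math>|a|</math> (an illustrative sketch; the parameter values are arbitrary):

<syntaxhighlight lang="python">
# Check h(X + c) = h(X) and h(aX) = h(X) + ln|a| for X ~ N(0, sigma²),
# using scipy's entropy(), which returns differential entropy in nats.
import numpy as np
from scipy.stats import norm

sigma, c, a = 1.5, 7.0, 3.0
h = norm(loc=0, scale=sigma).entropy()
h_shifted = norm(loc=c, scale=sigma).entropy()          # X + c
h_scaled = norm(loc=0, scale=abs(a) * sigma).entropy()  # aX
print(np.isclose(h_shifted, h), np.isclose(h_scaled, h + np.log(abs(a))))
</syntaxhighlight>
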
However, differential entropy does not have other desirable properties:
 
* It is not invariant under [[change of variables]], and is therefore most useful with dimensionless variables.
 
* It can be negative.
 
A modification of differential entropy that addresses these drawbacks is the '''relative information entropy''', also known as the Kullback–Leibler divergence, which includes an [[invariant measure]] factor (see [[limiting density of discrete points]]).
 
==Maximization in the normal distribution==
 
===Theorem===
 
With a [[normal distribution]], differential entropy is maximized for a given variance.  A Gaussian random variable has the largest entropy amongst all random variables of equal variance, or, alternatively, the maximum entropy distribution under constraints of mean and variance is the Gaussian.<ref name="cover_thomas" />{{rp|255}}
 
===Proof===
Let <math>g(x)</math> be a [[Normal distribution|Gaussian]] [[Probability density function|PDF]] with mean μ and variance <math>\sigma^2</math> and <math>f(x)</math> an arbitrary [[Probability density function|PDF]] with the same variance. Since differential entropy is translation invariant we can assume that <math>f(x)</math> has the same mean of <math>\mu</math> as <math>g(x)</math>.
Consider the [[Kullback–Leibler divergence]] between the two distributions
:<math> 0 \leq D_{KL}(f || g) = \int_{-\infty}^\infty f(x) \log \left( \frac{f(x)}{g(x)} \right) dx = -h(f) - \int_{-\infty}^\infty f(x)\log(g(x)) dx.</math>
Now note that
:<math>\begin{align}
\int_{-\infty}^\infty f(x)\log(g(x)) dx &= \int_{-\infty}^\infty f(x)\log\left( \frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(x-\mu)^2}{2\sigma^2}}\right) dx \\
&= \int_{-\infty}^\infty f(x) \log\frac{1}{\sqrt{2\pi\sigma^2}} dx + \log(e)\int_{-\infty}^\infty f(x)\left( -\frac{(x-\mu)^2}{2\sigma^2}\right) dx \\
&= -\tfrac{1}{2}\log(2\pi\sigma^2) - \log(e)\frac{\sigma^2}{2\sigma^2} \\
&= -\tfrac{1}{2}\left(\log(2\pi\sigma^2) + \log(e)\right) \\
&= -\tfrac{1}{2}\log(2\pi e \sigma^2)  \\
&= -h(g)
\end{align}</math>
because the result does not depend on <math>f(x)</math> other than through the variance.  Combining the two results yields
:<math> h(g) - h(f) \geq 0 \!</math>
with equality when <math>f(x)=g(x)</math> following from the properties of Kullback–Leibler divergence.
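As a numerical illustration of the theorem (a sketch; the uniform and Laplace parameters below are chosen so that each distribution has unit variance):

<syntaxhighlight lang="python">
# Among unit-variance distributions the Gaussian should have the
# largest differential entropy (values in nats).
import numpy as np
from scipy.stats import norm, uniform, laplace

h_gauss = norm(scale=1.0).entropy()                   # 0.5*ln(2*pi*e)
h_unif = uniform(loc=0, scale=np.sqrt(12)).entropy()  # var = 12/12 = 1
h_lapl = laplace(scale=1 / np.sqrt(2)).entropy()      # var = 2b² = 1
print(h_gauss, h_unif, h_lapl)  # ≈ 1.4189, 1.2425, 1.3466; Gaussian largest
</syntaxhighlight>
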
===Alternative proof===
This result may also be demonstrated using the [[variational calculus]]. A Lagrangian function with two [[Lagrangian multiplier]]s may be defined as:
:<math>L=\int_{-\infty}^\infty g(x)\ln(g(x))\,dx-\lambda_0\left(1-\int_{-\infty}^\infty g(x)\,dx\right)-\lambda\left(\sigma^2-\int_{-\infty}^\infty g(x)(x-\mu)^2\,dx\right)</math>
where ''g(x)'' is some function with mean μ. When the entropy of ''g(x)'' is at a maximum and the constraint equations, which consist of the normalization condition <math>\left(1=\int_{-\infty}^\infty g(x)\,dx\right)</math> and the requirement of fixed variance <math>\left(\sigma^2=\int_{-\infty}^\infty g(x)(x-\mu)^2\,dx\right)</math>, are both satisfied, then a small variation δ''g''(''x'') about ''g(x)'' will produce a variation δ''L'' about ''L'' which is equal to zero:
:<math>0=\delta L=\int_{-\infty}^\infty \delta g(x)\left (\ln(g(x))+1+\lambda_0+\lambda(x-\mu)^2\right )\,dx</math>
Since this must hold for any small δ''g''(''x''), the term in brackets must be zero, and solving for ''g(x)'' yields:
:<math>g(x)=e^{-\lambda_0-1-\lambda(x-\mu)^2}</math>
Using the constraint equations to solve for λ<sub>0</sub> and λ yields the normal distribution:
:<math>g(x)=\frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(x-\mu)^2}{2\sigma^2}}</math>
==Example: Exponential distribution==
Let <math>X</math> be an [[exponential distribution|exponentially distributed]] random variable with parameter <math>\lambda</math>, that is, with probability density function
:<math>f(x) = \lambda e^{-\lambda x} \mbox{ for } x \geq 0.</math>
Its differential entropy is then
{|
 
|-
| <math>h_e(X)\,</math>
| <math>=-\int_0^\infty \lambda e^{-\lambda x} \log (\lambda e^{-\lambda x})\,dx</math>
|-
|
| <math>= -\left(\int_0^\infty (\log \lambda)\lambda e^{-\lambda x}\,dx + \int_0^\infty (-\lambda x) \lambda e^{-\lambda x}\,dx\right) </math>
|-
|
| <math>= -\log \lambda \int_0^\infty f(x)\,dx + \lambda E[X]</math>
|-
|
| <math>= -\log\lambda + 1\,.</math>
|}

Here, <math>h_e(X)</math> was used rather than <math>h(X)</math> to make it explicit that the logarithm was taken to base ''e'', to simplify the calculation.
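
The closed form can be reproduced by direct quadrature of the definition (an illustrative sketch; the rate value is arbitrary):

<syntaxhighlight lang="python">
# h_e(X) = -∫ f ln f over [0, ∞) for f(x) = lam * exp(-lam * x)
# should equal 1 - ln(lam). The upper limit truncates a negligible tail.
import numpy as np
from scipy.integrate import quad

lam = 0.5
f = lambda x: lam * np.exp(-lam * x)
h, _ = quad(lambda x: -f(x) * np.log(f(x)), 0, 500)
print(h, 1 - np.log(lam))  # both ≈ 1.6931
</syntaxhighlight>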
      
==Relation to estimator error==

The differential entropy yields a lower bound on the expected squared error of an [[estimator]]. For any random variable <math>X</math> and estimator <math>\widehat{X}</math> the following holds:<ref name="cover_thomas" />

:<math>\operatorname{E}[(X - \widehat{X})^2] \ge \frac{1}{2\pi e}e^{2h(X)}</math>
      
with equality if and only if <math>X</math> is a Gaussian random variable and <math>\widehat{X}</math> is the mean of <math>X</math>.
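For a Gaussian <math>X</math> with <math>\widehat{X} = \operatorname{E}[X]</math>, the expected squared error is <math>\sigma^2</math> and the bound is attained with equality, since <math>e^{2h(X)}/(2\pi e) = \sigma^2</math>. A numerical sketch (illustrative; the variance is arbitrary):

<syntaxhighlight lang="python">
# For X ~ N(0, sigma²), e^{2h(X)}/(2πe) should equal Var(X) = sigma².
import numpy as np
from scipy.stats import norm

sigma = 2.0
h = norm(scale=sigma).entropy()             # 0.5*ln(2*pi*e*sigma²) in nats
bound = np.exp(2 * h) / (2 * np.pi * np.e)
print(bound, sigma**2)  # both 4.0: the Gaussian attains equality
</syntaxhighlight>
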
==Differential entropies for various distributions==
In the table below <math>\Gamma(x) = \int_0^{\infty} e^{-t} t^{x-1} dt</math> is the [[gamma function]], <math>\psi(x) = \frac{d}{dx} \ln\Gamma(x)=\frac{\Gamma'(x)}{\Gamma(x)}</math> is the [[digamma function]], <math>B(p,q) = \frac{\Gamma(p)\Gamma(q)}{\Gamma(p+q)}</math> is the [[beta function]], and γ<sub>''E''</sub> is [[Euler-Mascheroni constant|Euler's constant]].<ref>{{cite journal |last1=Park |first1=Sung Y. |last2=Bera |first2=Anil K. |year=2009 |title=Maximum entropy autoregressive conditional heteroskedasticity model |journal=Journal of Econometrics |publisher=Elsevier |url=http://www.wise.xmu.edu.cn/Master/Download/..%5C..%5CUploadFiles%5Cpaper-masterdownload%5C2009519932327055475115776.pdf |accessdate=2011-06-02 |archive-url=https://web.archive.org/web/20160307144515/http://wise.xmu.edu.cn/uploadfiles/paper-masterdownload/2009519932327055475115776.pdf |archive-date=2016-03-07 |url-status=dead }}</ref>{{rp|219–230}}
     −
{| class="wikitable" style="background:white"
  −
  −
{| class="wikitable" style="background:white"
  −
  −
{ | class“ wikitable”样式“ background: white”
  −
  −
|+ Table of differential entropies
  −
  −
|+ Table of differential entropies
  −
  −
| + 微分熵表
      
|-
 
|-
第733行: 第415行:  
|-
 
|-
   −
|-
+
where ''g(x)'' is some function with mean μ. When the entropy of ''g(x)'' is at a maximum and the constraint equations, which consist of the normalization condition <math>\left(1=\int_{-\infty}^\infty g(x)\,dx\right)</math> and the requirement of fixed variance <math>\left(\sigma^2=\int_{-\infty}^\infty g(x)(x-\mu)^2\,dx\right)</math>, are both satisfied, then a small variation δ''g''(''x'') about ''g(x)'' will produce a variation δ''L'' about ''L'' which is equal to zero:
   −
! Distribution Name !! Probability density function (pdf) !! Entropy in [[Nat (unit)|nat]]s || Support
+
| Cauchy || <math>f(x) = \frac{\gamma}{\pi} \frac{1}{\gamma^2 + x^2}</math> || <math>\ln(4\pi\gamma) \, </math>||<math>(-\infty,\infty)\,</math>
   −
! Distribution Name !! Probability density function (pdf) !! Entropy in nats || Support
+
| Cauchy | | < math > f (x) = frac { gamma }{ pi }{ pi ^ 2 + x ^ 2} </math > | < math > ln (4pi gamma) ,</math > | < math > (- infty,infty) ,</math >
   −
!发行名称! !概率密度函数(pdf)!Nats 中的熵 | 支持
     −
|-
      
|-
 
|-
第747行: 第427行:  
|-
 
|-
   −
| [[Uniform distribution (continuous)|Uniform]] || <math>f(x) = \frac{1}{b-a}</math> || <math>\ln(b - a) \,</math> ||<math>[a,b]\,</math>
+
:<math>0=\delta L=\int_{-\infty}^\infty \delta g(x)\left (\ln(g(x))+1+\lambda_0+\lambda(x-\mu)^2\right )\,dx</math>
   −
| Uniform || <math>f(x) = \frac{1}{b-a}</math> || <math>\ln(b - a) \,</math> ||<math>[a,b]\,</math>
+
| Chi || <math>f(x) = \frac{2}{2^{k/2}  \Gamma(k/2)} x^{k-1} \exp\left(-\frac{x^2}{2}\right)</math> || <math>\ln{\frac{\Gamma(k/2)}{\sqrt{2}}} - \frac{k-1}{2} \psi\left(\frac{k}{2}\right) + \frac{k}{2}</math>||<math>[0,\infty)\,</math>
   −
| Uniform | math f (x) frac {1}{ b-a } / math | math ln (b-a) / math | math [ a,b ] ,/ math
+
| Chi | | < math > f (x) = frac {2}{2 ^ { k/2} Gamma (k/2)}} x ^ { k-1} exp left (- frac { x ^ 2}{2}右) </math > | < math > ln { frac {(k/2)}}}{2}}}-frac {2} psi (frac { k }{2}右) + frac {2} </math > | | math > [0,infty) </math >
   −
|-
     −
|-
      
|-
 
|-
  −
| [[Normal distribution|Normal]] || <math>f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)</math> || <math>\ln\left(\sigma\sqrt{2\,\pi\,e}\right) </math>||<math>(-\infty,\infty)\,</math>
  −
  −
| Normal || <math>f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)</math> || <math>\ln\left(\sigma\sqrt{2\,\pi\,e}\right) </math>||<math>(-\infty,\infty)\,</math>
  −
  −
| 正规 | 数学 f (x) frac {1} | | | | 数学 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | (- frac {(x-mu) ^ 2}右) ,/ / 数学,数学
      
|-
 
|-
   −
|-
+
Since this must hold for any small δ''g''(''x''), the term in brackets must be zero, and solving for ''g(x)'' yields:

:<math>g(x)=e^{-\lambda_0-1-\lambda(x-\mu)^2}</math>

Using the constraint equations <math>\int_{-\infty}^\infty g(x)\,dx = 1</math> and <math>\int_{-\infty}^\infty (x-\mu)^2 g(x)\,dx = \sigma^2</math> to solve for <math>\lambda_0</math> and <math>\lambda</math> yields the normal distribution:

:<math>g(x)=\frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(x-\mu)^2}{2\sigma^2}}</math>
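The maximum-entropy property can be spot-checked numerically. The sketch below is illustrative rather than part of the derivation; it assumes NumPy and SciPy are available, and the value of <math>\sigma</math> is an arbitrary choice. Several distributions are rescaled to share one variance, and the normal's differential entropy comes out largest:

<syntaxhighlight lang="python">
# Spot check: among distributions with equal variance, the normal has the
# largest differential entropy.  SciPy's .entropy() on a continuous
# distribution returns differential entropy in nats.
import numpy as np
from scipy import stats

sigma = 1.5  # common standard deviation (arbitrary choice)

candidates = {
    "normal":   stats.norm(scale=sigma),
    "uniform":  stats.uniform(loc=-sigma * np.sqrt(3), scale=2 * sigma * np.sqrt(3)),
    "laplace":  stats.laplace(scale=sigma / np.sqrt(2)),
    "logistic": stats.logistic(scale=sigma * np.sqrt(3) / np.pi),
}

for name, dist in candidates.items():
    # dist.var() confirms the rescaling; dist.entropy() is h(X) in nats
    print(f"{name:9s}  var = {dist.var():.3f}  h = {dist.entropy():.4f}")

# Closed form for the normal, ln(sigma * sqrt(2*pi*e)); should match and dominate
print("normal closed form:", np.log(sigma * np.sqrt(2 * np.pi * np.e)))
</syntaxhighlight>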
==Example: Exponential distribution==

Let <math>X</math> be an [[exponential distribution|exponentially distributed]] random variable with parameter <math>\lambda</math>, that is, with probability density function
    
:<math>f(x) = \lambda e^{-\lambda x} \mbox{ for } x \geq 0.</math>

Its differential entropy is then
    
{|
| <math>h_e(X)\,</math>
| <math>=-\int_0^\infty \lambda e^{-\lambda x} \log (\lambda e^{-\lambda x})\,dx</math>
|-
|
| <math>= -\left(\int_0^\infty (\log \lambda)\lambda e^{-\lambda x}\,dx + \int_0^\infty (-\lambda x) \lambda e^{-\lambda x}\,dx\right) </math>
|-
|
| <math>= -\log \lambda \int_0^\infty f(x)\,dx + \lambda E[X]</math>
|-
|
| <math>= -\log\lambda + 1\,.</math>
|}
    
Here, <math>h_e(X)</math> was used rather than <math>h(X)</math> to make it explicit that the logarithm was taken to base ''e'', to simplify the calculation.
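As a quick cross-check of this closed form (an illustrative sketch assuming SciPy; the rate <math>\lambda = 2.5</math> and the sample size are arbitrary choices), the analytic value can be compared against SciPy's built-in entropy and a Monte Carlo estimate of <math>\operatorname{E}[-\log f(X)]</math>:

<syntaxhighlight lang="python">
# Cross-check h_e(X) = 1 - ln(lambda) for X ~ Exponential(lambda).
import numpy as np
from scipy import stats

lam = 2.5                                  # arbitrary rate parameter
dist = stats.expon(scale=1.0 / lam)        # SciPy parametrizes by scale = 1/lambda

closed_form = 1.0 - np.log(lam)            # the result derived above
analytic = dist.entropy()                  # SciPy's differential entropy (nats)

# Monte Carlo estimate of E[-log f(X)]
rng = np.random.default_rng(0)
samples = dist.rvs(size=200_000, random_state=rng)
monte_carlo = -np.mean(np.log(dist.pdf(samples)))

print(closed_form, analytic, monte_carlo)  # all approximately 0.0837
</syntaxhighlight>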
    
==Relation to estimator error==
    
The differential entropy yields a lower bound on the expected squared error of an [[estimator]]. For any random variable <math>X</math> and estimator <math>\widehat{X}</math> the following holds:<ref name="cover_thomas" />
    
:<math>\operatorname{E}[(X - \widehat{X})^2] \ge \frac{1}{2\pi e}e^{2h(X)}</math>

with equality if and only if <math>X</math> is a Gaussian random variable and <math>\widehat{X}</math> is the mean of <math>X</math>.
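The bound can be probed numerically. In the sketch below (illustrative only, assuming SciPy; the distribution parameters are arbitrary), the constant estimator <math>\widehat{X} = \operatorname{E}[X]</math> meets the bound with equality for a normal variable and exceeds it for a uniform one:

<syntaxhighlight lang="python">
# Probe the lower bound E[(X - X_hat)^2] >= exp(2 h(X)) / (2 pi e).
import numpy as np
from scipy import stats

def entropy_bound(h):
    """Right-hand side of the inequality, with h in nats."""
    return np.exp(2.0 * h) / (2.0 * np.pi * np.e)

cases = [("normal",  stats.norm(loc=1.0, scale=2.0)),
         ("uniform", stats.uniform(loc=0.0, scale=1.0))]

for name, dist in cases:
    mse = dist.var()  # squared error of the constant estimator X_hat = E[X]
    print(f"{name:7s}  E[(X-X_hat)^2] = {mse:.4f}  bound = {entropy_bound(dist.entropy()):.4f}")

# normal meets the bound with equality (4.0000 vs 4.0000);
# uniform exceeds it (0.0833 vs 0.0585), as only Gaussians attain it.
</syntaxhighlight>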
 
==Differential entropies for various distributions==

Many of the differential entropies below are from Lazo and Rathie.<ref name="lazorathie">{{cite journal|author=Lazo, A. and P. Rathie|title=On the entropy of continuous probability distributions|journal=IEEE Transactions on Information Theory|year=1978|volume=24|issue=1|doi=10.1109/TIT.1978.1055832|pages=120–122}}</ref>{{rp|120–122}}
 
In the table below <math>\Gamma(x) = \int_0^{\infty} e^{-t} t^{x-1} dt</math> is the [[gamma function]], <math>\psi(x) = \frac{d}{dx} \ln\Gamma(x)=\frac{\Gamma'(x)}{\Gamma(x)}</math> is the [[digamma function]], <math>B(p,q) = \frac{\Gamma(p)\Gamma(q)}{\Gamma(p+q)}</math> is the [[beta function]], and γ<sub>''E''</sub> is [[Euler-Mascheroni constant|Euler's constant]].<ref>{{cite journal |last1=Park |first1=Sung Y. |last2=Bera |first2=Anil K. |year=2009 |title=Maximum entropy autoregressive conditional heteroskedasticity model |journal=Journal of Econometrics |publisher=Elsevier |url=http://www.wise.xmu.edu.cn/Master/Download/..%5C..%5CUploadFiles%5Cpaper-masterdownload%5C2009519932327055475115776.pdf |accessdate=2011-06-02 |archive-url=https://web.archive.org/web/20160307144515/http://wise.xmu.edu.cn/uploadfiles/paper-masterdownload/2009519932327055475115776.pdf |archive-date=2016-03-07 |url-status=dead }}</ref>{{rp|219–230}}
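The closed forms in the table can be checked against numerical routines. As one illustrative example (assuming SciPy; the shape and scale values are arbitrary choices), the Gamma entry can be evaluated with the gamma and digamma functions just defined:

<syntaxhighlight lang="python">
# Evaluate one table entry numerically: the Gamma distribution's
# differential entropy ln(theta*Gamma(k)) + (1-k)*psi(k) + k.
import numpy as np
from scipy import special, stats

k, theta = 3.0, 2.0  # arbitrary shape and scale

closed_form = np.log(theta * special.gamma(k)) + (1.0 - k) * special.psi(k) + k
print(closed_form, stats.gamma(a=k, scale=theta).entropy())  # should agree
</syntaxhighlight>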
   −
 
{| class="wikitable" style="background:white"
|+ Table of differential entropies
|-
! Distribution Name !! Probability density function (pdf) !! Entropy in [[Nat (unit)|nat]]s || Support
|-
| [[Uniform distribution (continuous)|Uniform]] || <math>f(x) = \frac{1}{b-a}</math> || <math>\ln(b - a) \,</math> ||<math>[a,b]\,</math>
|-
| [[Normal distribution|Normal]] || <math>f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)</math> || <math>\ln\left(\sigma\sqrt{2\,\pi\,e}\right) </math>||<math>(-\infty,\infty)\,</math>
|-
| [[Exponential distribution|Exponential]] || <math>f(x) = \lambda \exp\left(-\lambda x\right)</math> || <math>1 - \ln \lambda \, </math>||<math>[0,\infty)\,</math>
|-
| [[Rayleigh distribution|Rayleigh]] || <math>f(x) = \frac{x}{\sigma^2} \exp\left(-\frac{x^2}{2\sigma^2}\right)</math> || <math>1 + \ln \frac{\sigma}{\sqrt{2}} + \frac{\gamma_E}{2}</math>||<math>[0,\infty)\,</math>
|-
| [[Beta distribution|Beta]] || <math>f(x) = \frac{x^{\alpha-1}(1-x)^{\beta-1}}{B(\alpha,\beta)}</math> for <math>0 \leq x \leq 1</math> || <math> \ln B(\alpha,\beta) - (\alpha-1)[\psi(\alpha) - \psi(\alpha +\beta)]\,</math><br /><math>- (\beta-1)[\psi(\beta) - \psi(\alpha + \beta)] \, </math>||<math>[0,1]\,</math>
|-
| [[Cauchy distribution|Cauchy]] || <math>f(x) = \frac{\gamma}{\pi} \frac{1}{\gamma^2 + x^2}</math> || <math>\ln(4\pi\gamma) \, </math>||<math>(-\infty,\infty)\,</math>
|-
| [[Chi distribution|Chi]] || <math>f(x) = \frac{2}{2^{k/2}  \Gamma(k/2)} x^{k-1} \exp\left(-\frac{x^2}{2}\right)</math> || <math>\ln{\frac{\Gamma(k/2)}{\sqrt{2}}} - \frac{k-1}{2} \psi\left(\frac{k}{2}\right) + \frac{k}{2}</math>||<math>[0,\infty)\,</math>
|-
| [[Chi-squared distribution|Chi-squared]] || <math>f(x) = \frac{1}{2^{k/2} \Gamma(k/2)} x^{\frac{k}{2}\!-\!1} \exp\left(-\frac{x}{2}\right)</math> || <math>\ln 2\Gamma\left(\frac{k}{2}\right) - \left(1 - \frac{k}{2}\right)\psi\left(\frac{k}{2}\right) + \frac{k}{2}</math>||<math>[0,\infty)\,</math>
|-
| [[Erlang distribution|Erlang]] || <math>f(x) = \frac{\lambda^k}{(k-1)!} x^{k-1} \exp(-\lambda x)</math> || <math>(1-k)\psi(k) + \ln \frac{\Gamma(k)}{\lambda} + k</math>||<math>[0,\infty)\,</math>
|-
| [[F distribution|F]] || <math>f(x) = \frac{n_1^{\frac{n_1}{2}} n_2^{\frac{n_2}{2}}}{B(\frac{n_1}{2},\frac{n_2}{2})} \frac{x^{\frac{n_1}{2} - 1}}{(n_2 + n_1 x)^{\frac{n_1 + n_2}{2}}}</math> || <math>\ln \frac{n_1}{n_2} B\left(\frac{n_1}{2},\frac{n_2}{2}\right) + \left(1 - \frac{n_1}{2}\right) \psi\left(\frac{n_1}{2}\right) -</math><br /><math>\left(1 + \frac{n_2}{2}\right)\psi\left(\frac{n_2}{2}\right) + \frac{n_1 + n_2}{2} \psi\left(\frac{n_1\!+\!n_2}{2}\right)</math>||<math>[0,\infty)\,</math>
|-
| [[Gamma distribution|Gamma]] || <math>f(x) = \frac{x^{k - 1} \exp(-\frac{x}{\theta})}{\theta^k \Gamma(k)}</math> || <math>\ln(\theta \Gamma(k)) + (1 - k)\psi(k) + k \, </math>||<math>[0,\infty)\,</math>
|-
| [[Laplace distribution|Laplace]] || <math>f(x) = \frac{1}{2b} \exp\left(-\frac{|x - \mu|}{b}\right)</math> || <math>1 + \ln(2b) \, </math>||<math>(-\infty,\infty)\,</math>
|-
| [[Logistic distribution|Logistic]] || <math>f(x) = \frac{e^{-x}}{(1 + e^{-x})^2}</math> || <math>2 \, </math>||<math>(-\infty,\infty)\,</math>
|-
| [[Log-normal distribution|Lognormal]] || <math>f(x) = \frac{1}{\sigma x \sqrt{2\pi}} \exp\left(-\frac{(\ln x - \mu)^2}{2\sigma^2}\right)</math> || <math>\mu + \frac{1}{2} \ln(2\pi e \sigma^2)</math>||<math>[0,\infty)\,</math>
|-
| [[Maxwell–Boltzmann distribution|Maxwell–Boltzmann]] || <math>f(x) = \frac{1}{a^3}\sqrt{\frac{2}{\pi}}\,x^{2}\exp\left(-\frac{x^2}{2a^2}\right)</math> || <math>\ln(a\sqrt{2\pi})+\gamma_E-\frac{1}{2}</math>||<math>[0,\infty)\,</math>
|-
| [[Generalized Gaussian distribution|Generalized normal]] || <math>f(x) = \frac{2 \beta^{\frac{\alpha}{2}}}{\Gamma(\frac{\alpha}{2})} x^{\alpha - 1} \exp(-\beta x^2)</math> || <math>\ln{\frac{\Gamma(\alpha/2)}{2\beta^{\frac{1}{2}}}} - \frac{\alpha - 1}{2} \psi\left(\frac{\alpha}{2}\right) + \frac{\alpha}{2}</math>||<math>(-\infty,\infty)\,</math>
|-
| [[Pareto distribution|Pareto]] || <math>f(x) = \frac{\alpha x_m^\alpha}{x^{\alpha+1}}</math> || <math>\ln \frac{x_m}{\alpha} + 1 + \frac{1}{\alpha}</math>||<math>[x_m,\infty)\,</math>
|-
| [[Student's t-distribution|Student's t]] || <math>f(x) = \frac{(1 + x^2/\nu)^{-\frac{\nu+1}{2}}}{\sqrt{\nu}B(\frac{1}{2},\frac{\nu}{2})}</math> || <math>\frac{\nu\!+\!1}{2}\left(\psi\left(\frac{\nu\!+\!1}{2}\right)\!-\!\psi\left(\frac{\nu}{2}\right)\right)\!+\!\ln \sqrt{\nu} B\left(\frac{1}{2},\frac{\nu}{2}\right)</math>||<math>(-\infty,\infty)\,</math>
|-
| [[Triangular distribution|Triangular]] || <math> f(x) = \begin{cases}
\frac{2(x-a)}{(b-a)(c-a)} & \mathrm{for\ } a \le x \leq c, \\[4pt]
\frac{2(b-x)}{(b-a)(b-c)} & \mathrm{for\ } c < x \le b, \\[4pt]
\end{cases}</math> || <math>\frac{1}{2} + \ln \frac{b-a}{2}</math>||<math>[a,b]\,</math>
|-
| [[Weibull distribution|Weibull]] || <math>f(x) = \frac{k}{\lambda^k} x^{k-1} \exp\left(-\frac{x^k}{\lambda^k}\right)</math> || <math>\frac{(k-1)\gamma_E}{k} + \ln \frac{\lambda}{k} + 1</math>||<math>[0,\infty)\,</math>
|-
| [[Multivariate normal distribution|Multivariate normal]] || <math>
f_X(\vec{x}) =</math><br /><math> \frac{\exp \left( -\frac{1}{2} ( \vec{x} - \vec{\mu})^\top \Sigma^{-1}\cdot(\vec{x} - \vec{\mu}) \right)} {(2\pi)^{N/2} \left|\Sigma\right|^{1/2}}</math> || <math>\frac{1}{2}\ln\{(2\pi e)^{N} \det(\Sigma)\}</math>||<math>\mathbb{R}^N</math>
|}

==Variants==

As described above, differential entropy does not share all properties of discrete entropy. For example, the differential entropy can be negative; also it is not invariant under continuous coordinate transformations. [[Edwin Thompson Jaynes]] showed in fact that the expression above is not the correct limit of the expression for a finite set of probabilities.<ref>{{cite journal |author=Jaynes, E.T. |authorlink=Edwin Thompson Jaynes |title=Information Theory And Statistical Mechanics |journal=Brandeis University Summer Institute Lectures in Theoretical Physics |volume=3 |issue=sect. 4b |year=1963 |url=http://bayes.wustl.edu/etj/articles/brandeis.pdf |format=PDF}}</ref>{{rp|181–218}}

A modification of differential entropy adds an [[invariant measure]] factor to correct this (see [[limiting density of discrete points]]). If <math>m(x)</math> is further constrained to be a probability density, the resulting notion is called [[relative entropy]] in information theory:

:<math>D(p||m) = \int p(x)\log\frac{p(x)}{m(x)}\,dx.</math>

The definition of differential entropy above can be obtained by partitioning the range of <math>X</math> into bins of length <math>h</math> with associated sample points <math>ih</math> within the bins, for <math>X</math> Riemann integrable. This gives a [[Quantization (signal processing)|quantized]] version of <math>X</math>, defined by <math>X_h = ih</math> if <math>ih \le X \le (i+1)h</math>. Then the entropy of <math>X_h</math> is<ref name="cover_thomas"/>

:<math>H_h=-\sum_i hf(ih)\log (f(ih)) - \sum_i hf(ih)\log(h).</math>

The first term on the right approximates the differential entropy, while the second term is approximately <math>-\log(h)</math>. Note that this procedure suggests that the entropy in the discrete sense of a [[continuous random variable]] should be <math>\infty</math>.
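A short numerical illustration of this binning argument (a sketch assuming NumPy and SciPy; the grid range and bin widths are arbitrary choices) shows <math>H_h</math> tracking <math>h(X) - \log(h)</math> and growing without bound as <math>h \to 0</math>:

<syntaxhighlight lang="python">
# Binning a standard normal: the discrete entropy H_h behaves like
# h(X) - log(h) and diverges as the bin width h shrinks.
import numpy as np
from scipy import stats

f = stats.norm().pdf
h_X = stats.norm().entropy()           # differential entropy, about 1.4189 nats

for h in [1.0, 0.1, 0.01]:             # arbitrary bin widths
    grid = np.arange(-10.0, 10.0, h)   # sample points ih spanning the support
    p = h * f(grid)                    # bin probabilities, approximately h*f(ih)
    H_h = -np.sum(p * np.log(p))       # discrete entropy of the quantized variable
    print(f"h = {h:5.2f}   H_h = {H_h:.4f}   h(X) - log(h) = {h_X - np.log(h):.4f}")
</syntaxhighlight>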
      
==See also==

*[[Information entropy]]
*[[Self-information]]
*[[Entropy estimation]]

==References==

{{reflist}}

==External links==

* {{springer|title=Differential entropy|id=p/d031890}}
* {{planetmath reference|id=1915|title=Differential entropy}}

[[Category:Entropy and information]]
[[Category:Information theory]]
[[Category:Statistical randomness]]