Line 11:
Differential entropy (also referred to as continuous entropy) is a concept in information theory that began as an attempt by Shannon to extend the idea of (Shannon) entropy, a measure of the average surprisal of a random variable, to continuous probability distributions. Unfortunately, Shannon did not derive this formula; he simply assumed it was the correct continuous analogue of discrete entropy, but it is not.
− Differential entropy (also referred to as continuous entropy) is a concept in information theory that began as an attempt by Shannon to extend the idea of (Shannon) entropy, a measure of the average surprisal of a random variable, to continuous probability distributions. Unfortunately, Shannon did not derive this formula; he simply assumed it was the correct continuous analogue of discrete entropy, but it is not.
+ <font color="#ff8000">Differential entropy</font> (also referred to as continuous entropy) is a concept in information theory that began as an attempt by Shannon to extend the idea of (Shannon) entropy, a measure of the average surprisal of a random variable, to continuous probability distributions. Unfortunately, Shannon did not derive this formula; he simply assumed it was the correct continuous analogue of discrete entropy, but it is not.
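One concrete way to see that the formula is not a direct analogue of discrete entropy: the differential entropy of a uniform distribution on <math>[0,a]</math> is <math>\ln(a)</math>, which is negative whenever <math>a<1</math>, something a discrete Shannon entropy can never be. A minimal numerical sketch of this point, assuming SciPy is available (the values of <math>a</math> are arbitrary illustrations):

<syntaxhighlight lang="python">
from scipy.stats import uniform

# Differential entropy of Uniform(0, a) is ln(a); it turns negative for a < 1,
# whereas discrete Shannon entropy is always non-negative.
for a in (2.0, 1.0, 0.5):
    h = float(uniform(loc=0.0, scale=a).entropy())  # returns ln(a) in nats
    print(f"a = {a}: differential entropy = {h:.4f} nats")
</syntaxhighlight>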

Line 383:
The differential entropy yields a lower bound on the expected squared error of an estimator. For any random variable <math>X</math> and estimator <math>\widehat{X}</math> the following holds:
− The differential entropy yields a lower bound on the expected squared error of an estimator. For any random variable <math>X</math> and estimator, the following holds:
+ The differential entropy yields a lower bound on the expected squared error of an estimator. For any random variable <math>X</math> and estimator <math>\widehat{X}</math>, the following holds:
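The bound referred to here is the standard estimation-error inequality <math>\operatorname{E}\left[(X - \widehat{X})^2\right] \ge \frac{1}{2\pi e} e^{2h(X)}</math>, with equality exactly when <math>X</math> is Gaussian and <math>\widehat{X}</math> is its mean. A minimal numerical sketch of the equality case, assuming NumPy (the sample size, seed, and <math>\sigma</math> are arbitrary choices for illustration):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0
x = rng.normal(loc=0.0, scale=sigma, size=1_000_000)

# Differential entropy of N(0, sigma^2): h(X) = 0.5 * ln(2*pi*e*sigma^2)
h = 0.5 * np.log(2 * np.pi * np.e * sigma**2)

mse = np.mean((x - x.mean()) ** 2)          # estimator = sample mean of X
bound = np.exp(2 * h) / (2 * np.pi * np.e)  # equals sigma^2 for a Gaussian

print(mse, ">=", bound)  # the two agree up to sampling noise: the equality case
</syntaxhighlight>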

Line 392:
===Alternative proof===
+ Alternative proof
with equality if and only if <math>X</math> is a Gaussian random variable and <math>\widehat{X}</math> is the mean of <math>X</math>.
+ with equality if and only if <math>X</math> is a Gaussian random variable and <math>\widehat{X}</math> is the mean of <math>X</math>.
| | | |
This result may also be demonstrated using the [[variational calculus]]. A Lagrangian function with two [[Lagrangian multiplier]]s may be defined as:
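One common way to set up this Lagrangian, sketched here under the usual two constraints (normalization of the density <math>g(x)</math> and a fixed variance <math>\sigma^2</math>; the exact notation is an assumption, not necessarily the article's own):

<math>
L = \int_{-\infty}^{\infty} g(x)\ln(g(x))\,dx
  \;-\; \lambda_0\left(1 - \int_{-\infty}^{\infty} g(x)\,dx\right)
  \;-\; \lambda\left(\sigma^2 - \int_{-\infty}^{\infty} g(x)(x-\mu)^2\,dx\right)
</math>

Setting the variational derivative with respect to <math>g</math> to zero then forces <math>g</math> to be an exponential of a quadratic, i.e. Gaussian.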

Line 486:
==Example: Exponential distribution==
+ Example: Exponential distribution
| Laplace || <math>f(x) = \frac{1}{2b} \exp\left(-\frac{|x - \mu|}{b}\right)</math> || <math>1 + \ln(2b) \,</math> || <math>(-\infty,\infty)\,</math>
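As a quick numerical check of the Laplace entry above, a minimal sketch assuming SciPy (the scale value <math>b</math> is arbitrary):

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import laplace

b = 1.5  # illustrative scale parameter
h_closed_form = 1 + np.log(2 * b)                  # table entry: 1 + ln(2b)
h_scipy = float(laplace(loc=0.0, scale=b).entropy())

print(h_closed_form, h_scipy)  # both equal ln(2be), about 2.0986 nats for b = 1.5
</syntaxhighlight>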

Line 624:
==Relation to estimator error==
+ Relation to estimator error
|-

Line 657:
The first term on the right approximates the differential entropy, while the second term is approximately <math>-\log(h)</math>. Note that this procedure suggests that the entropy in the discrete sense of a continuous random variable should be <math>\infty</math>.
+ The first term on the right approximates the differential entropy, while the second term is approximately <math>-\log(h)</math>. Note that this procedure suggests that the entropy in the discrete sense of a continuous random variable should be <math>\infty</math>.
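A minimal numerical sketch of this limiting argument, assuming a standard normal <math>X</math> quantized into bins of width <math>h</math> (both the distribution and the bin widths are arbitrary choices for illustration): the discrete entropy of the quantized variable stays close to <math>h(X) - \log(h)</math> and grows without bound as <math>h \to 0</math>.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

h_X = 0.5 * np.log(2 * np.pi * np.e)  # differential entropy of N(0,1) in nats

for width in (1.0, 0.1, 0.01):
    # Probability mass falling in each bin of the given width
    edges = np.arange(-10, 10 + width, width)
    p = np.diff(norm.cdf(edges))
    p = p[p > 0]
    H_discrete = -np.sum(p * np.log(p))            # discrete Shannon entropy (nats)
    print(width, H_discrete, h_X - np.log(width))  # last two values are close
</syntaxhighlight>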
|+ Table of differential entropies