==Information theory 信息论==
    
In [[information theory]] and [[statistics]], negentropy is used as a measure of distance to normality.<ref>Aapo Hyvärinen, [http://www.cis.hut.fi/aapo/papers/NCS99web/node32.html Survey on Independent Component Analysis, node32: Negentropy], Helsinki University of Technology Laboratory of Computer and Information Science</ref><ref>Aapo Hyvärinen and Erkki Oja, [http://www.cis.hut.fi/aapo/papers/IJCNN99_tutorialweb/node14.html Independent Component Analysis: A Tutorial, node14: Negentropy], Helsinki University of Technology Laboratory of Computer and Information Science</ref><ref>Ruye Wang, [http://fourier.eng.hmc.edu/e161/lectures/ica/node4.html Independent Component Analysis, node4: Measures of Non-Gaussianity]</ref> Out of all [[Distribution (mathematics)|distributions]] with a given mean and variance, the normal or [[Gaussian distribution]] is the one with the highest entropy. Negentropy measures the difference in entropy between a given distribution and the Gaussian distribution with the same mean and variance. Thus, negentropy is always nonnegative, is invariant by any linear invertible change of coordinates, and vanishes [[if and only if]] the signal is Gaussian.
在信息论和统计学中,负熵被用来度量到正态分布的距离。在所有具有给定均值和方差的分布中,正态分布或高斯分布的熵最大。负熵用来度量具有相同均值和方差的给定分布和正态分布之间熵的差距。因此,负熵总是非负的,在任何线性可逆的坐标变换下都是不变的,当且仅当信号是高斯分布时才变为零。
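
The maximum-entropy property above can be checked numerically. The following is a minimal sketch (added here for illustration, not part of the original article), assuming SciPy's analytic <code>entropy()</code> and <code>var()</code> methods for its standard distributions; every unit-variance example below has differential entropy at or below the Gaussian value of about 1.419 nats.

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

def gaussian_entropy(var):
    """Differential entropy (nats) of a Gaussian with variance var: 0.5*ln(2*pi*e*var)."""
    return 0.5 * np.log(2 * np.pi * np.e * var)

# Unit-variance distributions; only the Gaussian attains the entropy bound.
candidates = {
    "gaussian":    stats.norm(0, 1),
    "uniform":     stats.uniform(-np.sqrt(3), 2 * np.sqrt(3)),   # width 2*sqrt(3) -> variance 1
    "laplace":     stats.laplace(scale=1 / np.sqrt(2)),          # 2*b**2 = 1
    "exponential": stats.expon(),                                 # variance 1
}
for name, dist in candidates.items():
    s, bound = float(dist.entropy()), gaussian_entropy(float(dist.var()))
    print(f"{name:12s} S = {s:.4f}   Gaussian bound = {bound:.4f}   negentropy = {bound - s:.4f}")
</syntaxhighlight>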
 
<math>J(p_x) = S(\varphi_x) - S(p_x)\,</math>
 
where <math>S(\varphi_x)</math> is the differential entropy of the Gaussian density with the same mean and variance as <math>p_x</math> and <math>S(p_x)</math> is the differential entropy of <math>p_x</math>:
其中<math>S(\varphi_x)</math>表示与<math>p_x</math>具有相同均值和方差的高斯密度的微分熵,<math>S(p_x)</math>表示<math>p_x</math>的微分熵:
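
As a small worked example (added for illustration, not in the original text): the differential entropy of a Gaussian with variance <math>\sigma^2</math> is <math>S(\varphi_x) = \tfrac{1}{2}\ln(2\pi e \sigma^2)</math>, so for <math>p_x</math> uniform on <math>[0,1]</math>, where <math>S(p_x) = 0</math> and <math>\sigma^2 = 1/12</math>,

<math>J(p_x) = \tfrac{1}{2}\ln\!\left(\tfrac{2\pi e}{12}\right) - 0 \approx 0.176 \text{ nats}.</math>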
 
Negentropy is used in statistics and signal processing. It is related to network entropy, which is used in independent component analysis.
负熵通常用于统计和信号处理。它与网络熵有关,网络熵被用于独立成分分析。
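
In independent component analysis, negentropy itself is rarely computed exactly; the Hyvärinen tutorials cited above work with approximations of the form <math>J(y) \approx \left( \operatorname{E}\{G(y)\} - \operatorname{E}\{G(\nu)\} \right)^2</math> for standardized <math>y</math>, a standard normal <math>\nu</math>, and a nonquadratic contrast function <math>G</math>. The sketch below only illustrates that idea with <math>G(u) = \log\cosh u</math>; the function name and the Monte Carlo estimate of <math>\operatorname{E}\{G(\nu)\}</math> are choices made for this example, not the article's code.

<syntaxhighlight lang="python">
import numpy as np

def negentropy_approx(y):
    """Contrast-based negentropy proxy (E[G(y)] - E[G(nu)])**2 with G(u) = log cosh(u).

    y is standardized to zero mean and unit variance; nu is standard normal and
    E[G(nu)] is estimated by Monte Carlo. Returns ~0 for Gaussian data, > 0 otherwise.
    """
    y = (np.asarray(y) - np.mean(y)) / np.std(y)
    G = lambda u: np.log(np.cosh(u))
    e_gauss = G(np.random.default_rng(0).standard_normal(200_000)).mean()
    return float((G(y).mean() - e_gauss) ** 2)

rng = np.random.default_rng(1)
print(negentropy_approx(rng.standard_normal(50_000)))  # Gaussian source: close to 0
print(negentropy_approx(rng.laplace(size=50_000)))     # super-Gaussian source: larger
print(negentropy_approx(rng.uniform(size=50_000)))     # sub-Gaussian source: larger
</syntaxhighlight>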
 
The negentropy of a distribution is equal to the Kullback–Leibler divergence between <math>p_x</math> and a Gaussian distribution with the same mean and variance as <math>p_x</math> (see  Differential entropy#Maximization in the normal distribution for a proof). In particular, it is always nonnegative.
一个分布的负熵等于 <math>p_x</math> 与具有相同均值和方差的高斯分布之间的 Kullback–Leibler 散度(证明参见微分熵#正态分布中的最大化)。特别是,它总是非负的。
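
The equivalence follows in one step (a derivation sketch added here for completeness): because <math>\ln \varphi_x</math> is a quadratic polynomial in <math>x</math>, its expectation under any density with the same mean and variance is the same, so <math>\operatorname{E}_{p_x}[\ln \varphi_x] = \operatorname{E}_{\varphi_x}[\ln \varphi_x] = -S(\varphi_x)</math>, and hence

<math>D_{\mathrm{KL}}(p_x \| \varphi_x) = \int p_x(x) \ln\frac{p_x(x)}{\varphi_x(x)}\,dx = -S(p_x) - \operatorname{E}_{p_x}[\ln \varphi_x] = S(\varphi_x) - S(p_x) = J(p_x).</math>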
     