* The chain rule for differential entropy holds as in the discrete case, and since conditioning cannot increase entropy, it implies subadditivity:

::<math>h(X_1, \ldots, X_n) = \sum_{i=1}^{n} h(X_i|X_1, \ldots, X_{i-1}) \leq \sum_{i=1}^{n} h(X_i)</math>.
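For jointly Gaussian variables, both sides of this inequality have closed forms, so the subadditivity can be checked directly (it reduces to Hadamard's inequality <math>\det\Sigma \leq \prod_i \Sigma_{ii}</math>). A minimal sketch, with the covariance matrix chosen arbitrarily for illustration:

<syntaxhighlight lang="python">
import numpy as np

# Closed forms for a zero-mean Gaussian vector with covariance Sigma:
#   h(X_1, ..., X_n) = 0.5 * ln((2*pi*e)^n * det(Sigma))
#   h(X_i)           = 0.5 * ln(2*pi*e * Sigma[i, i])
# so the chain-rule inequality reduces to det(Sigma) <= prod_i Sigma[i, i].

Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])   # illustrative covariance matrix (an assumption)
n = Sigma.shape[0]

h_joint = 0.5 * np.log((2 * np.pi * np.e) ** n * np.linalg.det(Sigma))
h_sum = sum(0.5 * np.log(2 * np.pi * np.e * Sigma[i, i]) for i in range(n))

print(h_joint, h_sum)
assert h_joint <= h_sum   # subadditivity holds
</syntaxhighlight>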
* Differential entropy is translation invariant, i.e. <math>h(X+c) = h(X)</math> for a constant <math>c</math>.<ref name="cover_thomas" />{{rp|253}}
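:This follows directly from the definition: if <math>f</math> is the density of <math>X</math>, then <math>X+c</math> has density <math>f(x-c)</math>, and the substitution <math>x \mapsto x+c</math> leaves the integral unchanged:

::<math>h(X+c) = -\int f(x-c)\ln f(x-c)\,dx = -\int f(x)\ln f(x)\,dx = h(X).</math>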
* In general, for a transformation from a random vector to another random vector with the same dimension <math>\mathbf{Y}=m \left(\mathbf{X}\right)</math>, the corresponding entropies are related via
::<math>h(\mathbf{Y}) \leq h(\mathbf{X}) + \int f(x) \log \left\vert \frac{\partial m}{\partial x} \right\vert dx</math>
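:When <math>m</math> is invertible and differentiable, the bound is attained with equality; for an invertible linear map <math>\mathbf{Y} = A\mathbf{X}</math> it reduces to <math>h(\mathbf{Y}) = h(\mathbf{X}) + \log \left\vert \det A \right\vert</math>. A minimal numeric sketch of the linear case, assuming a Gaussian <math>\mathbf{X}</math> (an assumption made here only so that both entropies have closed forms):

<syntaxhighlight lang="python">
import numpy as np

def h_gauss(S):
    """Differential entropy (in nats) of a zero-mean Gaussian with covariance S."""
    n = S.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** n * np.linalg.det(S))

rng = np.random.default_rng(0)
n = 3
Sigma = np.diag([1.0, 2.0, 0.5])   # illustrative covariance of X (an assumption)
A = rng.standard_normal((n, n))    # a random, almost surely invertible map

# Y = A X is Gaussian with covariance A Sigma A^T, so the equality
# h(Y) = h(X) + ln|det A| can be verified in closed form.
h_X = h_gauss(Sigma)
h_Y = h_gauss(A @ Sigma @ A.T)

assert np.isclose(h_Y, h_X + np.log(abs(np.linalg.det(A))))
</syntaxhighlight>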
* It can be negative. For example, let <math>X</math> be an exponentially distributed random variable with parameter <math>\lambda</math>, that is, with probability density function

::<math>f(x) = \lambda e^{-\lambda x}, \qquad x \geq 0.</math>

:Its differential entropy is <math>h(X) = 1 - \ln \lambda</math>, which is negative whenever <math>\lambda > e</math>.
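:A quick numeric cross-check of this example, using SciPy's <code>entropy()</code>, which returns the differential entropy in nats for continuous distributions:

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import expon

# SciPy parameterizes the exponential distribution by scale = 1/lambda,
# so expon(scale=1/lam).entropy() should equal 1 - ln(lam).
for lam in (0.5, np.e, 10.0):
    h = expon(scale=1.0 / lam).entropy()
    print(lam, float(h), 1.0 - np.log(lam))   # negative once lam > e
</syntaxhighlight>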
      
A modification of differential entropy that addresses these drawbacks is the '''relative information entropy''', also known as the Kullback–Leibler divergence, which includes an [[invariant measure]] factor (see [[limiting density of discrete points]]).
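Concretely, relative to a reference density <math>m(x)</math> one considers the quantity (the notation <math>h_m</math> is used here only for illustration)

::<math>h_m(X) = -\int f(x) \log \frac{f(x)}{m(x)} \, dx,</math>

which, unlike <math>h(X)</math>, is invariant under an invertible change of variables, because <math>f</math> and <math>m</math> transform with the same Jacobian factor and the ratio is unchanged.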