Intuitively, mutual information measures the information that <math>X</math> and <math>Y</math> share: it measures how much knowing one of these variables reduces uncertainty about the other. For example, if <math>X</math> and <math>Y</math> are independent, then knowing <math>X</math> does not give any information about <math>Y</math> and vice versa, so their mutual information is zero. At the other extreme, if <math>X</math> is a deterministic function of <math>Y</math> and <math>Y</math> is a deterministic function of <math>X</math>, then all information conveyed by <math>X</math> is shared with <math>Y</math>: knowing <math>X</math> determines the value of <math>Y</math> and vice versa. As a result, in this case the mutual information is the same as the uncertainty contained in <math>Y</math> (or <math>X</math>) alone, namely the entropy of <math>Y</math> (or <math>X</math>). Moreover, this mutual information is the same as the entropy of <math>X</math> and as the entropy of <math>Y</math>. (A very special case of this is when <math>X</math> and <math>Y</math> are the same random variable.)
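The two extremes above can be checked against the standard decomposition <math>\operatorname{I}(X;Y)=\operatorname{H}(X)-\operatorname{H}(X\mid Y)=\operatorname{H}(Y)-\operatorname{H}(Y\mid X)</math>; the following is a brief sketch using the usual entropy notation <math>\operatorname{H}(\cdot)</math> and conditional entropy <math>\operatorname{H}(\cdot\mid\cdot)</math>, which are not defined in this excerpt:

:<math>X \text{ independent of } Y \;\Rightarrow\; \operatorname{H}(X\mid Y)=\operatorname{H}(X) \;\Rightarrow\; \operatorname{I}(X;Y)=0,</math>

:<math>Y=f(X) \text{ and } X=g(Y) \;\Rightarrow\; \operatorname{H}(X\mid Y)=\operatorname{H}(Y\mid X)=0 \;\Rightarrow\; \operatorname{I}(X;Y)=\operatorname{H}(X)=\operatorname{H}(Y).</math>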
 