− | Intuitively, if entropy <math>H(Y)</math> is regarded as a measure of uncertainty about a random variable, then <math>Ha(Y|X)</math> is a measure of what <math>X</math> does ''not'' say about <math>Y</math>. This is "the amount of uncertainty remaining about <math>Y</math> after <math>X</math> is known", and thus the right side of the second of these equalities can be read as "the amount of uncertainty in <math>Y</math>, minus the amount of uncertainty in <math>Y</math> which remains after <math>X</math> is known", which is equivalent to "the amount of uncertainty in <math>Y</math> which is removed by knowing <math>X</math>". This corroborates the intuitive meaning of mutual information as the amount of information (that is, reduction in uncertainty) that knowing either variable provides about the other. | + | Intuitively, if entropy <math>H(Y)</math> is regarded as a measure of uncertainty about a random variable, then <math>H(Y|X)</math> is a measure of what <math>X</math> does ''not'' say about <math>Y</math>. This is "the amount of uncertainty remaining about <math>Y</math> after <math>X</math> is known", and thus the right side of the second of these equalities can be read as "the amount of uncertainty in <math>Y</math>, minus the amount of uncertainty in <math>Y</math> which remains after <math>X</math> is known", which is equivalent to "the amount of uncertainty in <math>Y</math> which is removed by knowing <math>X</math>". This corroborates the intuitive meaning of mutual information as the amount of information (that is, reduction in uncertainty) that knowing either variable provides about the other. |
| 如果熵<math>H(Y)</math>被看作是对一个随机变量不确定性的度量,那么<math>H(Y|X)</math>就是对<math>X</math>没有说明<math>Y</math>的部分的度量。这是“<math>X</math>已知后<math>Y</math>剩余的不确定性量”,因此,第二个等式的右边可以被解读为“<math>Y</math>中的不确定性量,减去<math>X</math>已知后<math>Y</math>中仍然存在的不确定性量”,这相当于“通过知道<math>X</math>而从<math>Y</math>中去除的不确定性量”。这证实了互信息的直观含义,即知道任何一个变量所提供的关于另一个变量的信息量(即不确定性的减少)。 | | 如果熵<math>H(Y)</math>被看作是对一个随机变量不确定性的度量,那么<math>H(Y|X)</math>就是对<math>X</math>没有说明<math>Y</math>的部分的度量。这是“<math>X</math>已知后<math>Y</math>剩余的不确定性量”,因此,第二个等式的右边可以被解读为“<math>Y</math>中的不确定性量,减去<math>X</math>已知后<math>Y</math>中仍然存在的不确定性量”,这相当于“通过知道<math>X</math>而从<math>Y</math>中去除的不确定性量”。这证实了互信息的直观含义,即知道任何一个变量所提供的关于另一个变量的信息量(即不确定性的减少)。 |
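The identity read off above, <math>I(X;Y) = H(Y) - H(Y|X)</math>, can be checked numerically with a short sketch. The joint distribution below is a made-up illustrative example, not taken from the article:

```python
import math

# Hypothetical joint distribution p(x, y) over two binary variables,
# chosen only to illustrate I(X;Y) = H(Y) - H(Y|X).
joint = {
    (0, 0): 0.4, (0, 1): 0.1,
    (1, 0): 0.1, (1, 1): 0.4,
}

def entropy(dist):
    """Shannon entropy in bits of a probability distribution given as {outcome: p}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Marginals p(x) and p(y)
px, py = {}, {}
for (x, y), p in joint.items():
    px[x] = px.get(x, 0.0) + p
    py[y] = py.get(y, 0.0) + p

# Conditional entropy H(Y|X) = sum_x p(x) * H(Y | X = x)
h_y_given_x = 0.0
for x, p_x in px.items():
    cond = {y: joint[(x, y)] / p_x for y in py if joint.get((x, y), 0) > 0}
    h_y_given_x += p_x * entropy(cond)

# Mutual information as "uncertainty in Y removed by knowing X"
mi = entropy(py) - h_y_given_x
print(entropy(py), h_y_given_x, mi)  # H(Y) = 1 bit; H(Y|X) ≈ 0.722; I(X;Y) ≈ 0.278
```

Here knowing <math>X</math> shifts the conditional distribution of <math>Y</math> toward one outcome, so <math>H(Y|X) < H(Y)</math> and the difference is exactly the information gained.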