53 bytes added, 14:56, 28 October 2020 (Wednesday)
Line 47:
== Motivation ==
−	Let H(Y ǀ X = x) be the [[Shannon Entropy|entropy]] of the discrete random variable <math>Y</math> conditioned on the discrete random variable <math>X</math> taking a certain value <math>x</math>. Denote the support sets of <math>X</math> and <math>Y</math> by <math>\mathcal X</math> and <math>\mathcal Y</math>. Let <math>Y</math> have [[probability mass function]] <math>p_Y{(y)}</math>. The unconditional entropy of <math>Y</math> is calculated as H(Y):=E[I(Y), i.e.

+	Let <math>H(Y \mid X = x)</math> be the [[Shannon Entropy|entropy]] of the discrete random variable <math>Y</math> conditioned on the discrete random variable <math>X</math> taking a certain value <math>x</math>. Denote the support sets of <math>X</math> and <math>Y</math> by <math>\mathcal X</math> and <math>\mathcal Y</math>. Let <math>Y</math> have [[probability mass function]] <math>p_Y{(y)}</math>. The unconditional entropy of <math>Y</math> is calculated as <math>H(Y) := \mathbb{E}[I(Y)]</math>, i.e.
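The unconditional entropy <math>H(Y) = \mathbb{E}[I(Y)] = -\sum_{y \in \mathcal Y} p_Y(y) \log_2 p_Y(y)</math> can be checked numerically. A minimal sketch (the function name is illustrative, not from the article):

```python
import math

def entropy(p):
    """Shannon entropy H(Y) = E[I(Y)] = -sum_y p(y) * log2 p(y), in bits.

    `p` is a probability mass function given as a sequence of probabilities;
    zero-probability outcomes contribute nothing and are skipped.
    """
    return -sum(py * math.log2(py) for py in p if py > 0)

# A fair coin carries exactly 1 bit of entropy; a certain outcome carries 0.
print(entropy([0.5, 0.5]))  # 1.0
print(entropy([1.0]))       # 0.0
```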
Line 66:

−	Note that H(Y ǀ X) is the result of averaging H(Y ǀ X = x) over all possible values <math>x</math> that <math>X</math> may take. Also, if the above sum is taken over a sample <math>y_1, \dots, y_n</math>, the expected value <math>E_X[ H(y_1, \dots, y_n \mid X = x)]</math> is known in some domains as '''equivocation'''.<ref>{{cite journal|author1=Hellman, M.|author2=Raviv, J.|year=1970|title=Probability of error, equivocation, and the Chernoff bound|journal=IEEE Transactions on Information Theory|volume=16|issue=4|pages=368–372}}</ref>

+	Note that <math>H(Y \mid X)</math> is the result of averaging <math>H(Y \mid X = x)</math> over all possible values <math>x</math> that <math>X</math> may take. Also, if the above sum is taken over a sample <math>y_1, \dots, y_n</math>, the expected value <math>E_X[ H(y_1, \dots, y_n \mid X = x)]</math> is known in some domains as '''equivocation'''.<ref>{{cite journal|author1=Hellman, M.|author2=Raviv, J.|year=1970|title=Probability of error, equivocation, and the Chernoff bound|journal=IEEE Transactions on Information Theory|volume=16|issue=4|pages=368–372}}</ref>
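The averaging described above, <math>H(Y \mid X) = \sum_{x \in \mathcal X} p_X(x)\, H(Y \mid X = x)</math>, can be sketched from a joint distribution table. A minimal illustration (function names and the table layout are assumptions for this sketch):

```python
import math

def entropy(p):
    """H(Y) = -sum_y p(y) * log2 p(y), in bits; zero entries are skipped."""
    return -sum(v * math.log2(v) for v in p if v > 0)

def conditional_entropy(joint):
    """H(Y|X) = sum_x p(x) * H(Y | X = x).

    `joint` is a table where joint[x][y] = p(X = x, Y = y); each row is
    normalized by the marginal p(x) to get the conditional p(y | x).
    """
    h = 0.0
    for row in joint:
        px = sum(row)                       # marginal p(X = x)
        if px > 0:
            h += px * entropy([pxy / px for pxy in row])
    return h

# If X and Y are independent fair bits, conditioning gains nothing:
# H(Y|X) equals H(Y) = 1 bit.
print(conditional_entropy([[0.25, 0.25],
                           [0.25, 0.25]]))  # 1.0
```

When <math>Y</math> is fully determined by <math>X</math> (e.g. the joint table `[[0.5, 0.0], [0.0, 0.5]]`), every conditional distribution is degenerate and the function returns 0, matching the property that <math>H(Y \mid X) = 0</math> exactly when <math>Y</math> is a function of <math>X</math>.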
     