Line 8:
In [[information theory]], the '''conditional entropy''' quantifies the amount of information needed to describe the outcome of a [[random variable]] <math>Y</math> given that the value of another random variable <math>X</math> is known. Here, information is measured in [[Shannon (unit)|shannon]]s, [[Nat (unit)|nat]]s, or [[Hartley (unit)|hartley]]s. The ''entropy of <math>Y</math> conditioned on <math>X</math>'' is written as <math>H(Y|X)</math>.
− 在'''<font color="#ff8000"> 信息论Information theory</font>'''中,假设随机变量<math>X</math>的值已知,那么'''<font color="#ff8000"> 条件熵Conditional entropy</font>'''则用于去定量描述随机变量<math>Y</math>表示的信息量。此时,信息以'''<font color="#ff8000"> 香农Shannon </font>''','''<font color="#ff8000"> 奈特nat</font>'''或'''<font color="#ff8000"> 哈特莱hartley</font>'''来衡量。已知<math>X</math>的条件下<math>Y</math>的熵记为<math>H(X ǀ Y)</math>。
+ 在'''<font color="#ff8000">信息论 Information theory</font>'''中,假设随机变量<math>X</math>的值已知,那么'''<font color="#ff8000">条件熵 Conditional entropy</font>'''量化了描述随机变量<math>Y</math>的结果所需的信息量。此时,信息以'''<font color="#ff8000">香农 Shannon</font>''','''<font color="#ff8000">奈特 nat</font>'''或'''<font color="#ff8000">哈特莱 hartley</font>'''来衡量。已知<math>X</math>的条件下<math>Y</math>的熵记为<math>H(Y|X)</math>。
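As a concrete illustration of the definition quoted above, here is a minimal Python sketch (not part of the article) that computes <math>H(Y|X)</math> in bits from a small joint distribution; the example table and the name <code>conditional_entropy</code> are illustrative assumptions.

<syntaxhighlight lang="python">
import math

# Arbitrary example joint distribution p(x, y); not taken from the article.
joint = {
    (0, 0): 0.25, (0, 1): 0.25,
    (1, 0): 0.40, (1, 1): 0.10,
}

def conditional_entropy(joint, base=2.0):
    """H(Y|X) = -sum over (x, y) of p(x, y) * log(p(x, y) / p(x))."""
    # Marginal distribution p(x), obtained by summing the joint table over y.
    px = {}
    for (x, _y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
    h = 0.0
    for (x, _y), p in joint.items():
        if p > 0.0:  # pairs outside the support contribute nothing
            h -= p * math.log(p / px[x], base)
    return h

print(conditional_entropy(joint))  # ≈ 0.861 bits for the table above
</syntaxhighlight>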
Line 32:
where <math>\mathcal X</math> and <math>\mathcal Y</math> denote the [[Support (mathematics)|support sets]] of <math>X</math> and <math>Y</math>.
− 其中<math>\mathcal X</math>和<math>\mathcal Y</math>表示<math>X</math>和<math>Y</math>的<font color="#32cd32">支撑集</font>。
+ 其中<math>\mathcal X</math>和<math>\mathcal Y</math>表示<math>X</math>和<math>Y</math>的'''<font color="#ff8000">支撑集 support sets</font>'''。
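For reference, the double sum that this clause qualifies is the standard discrete definition, restated here since the displayed equation itself lies outside this excerpt:

:<math>H(Y|X) = -\sum_{x\in\mathcal X,\, y\in\mathcal Y} p(x,y)\,\log\frac{p(x,y)}{p(x)},</math>

with the usual convention that terms with <math>p(x,y)=0</math> contribute zero, which is why the sum can be restricted to the support sets.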
Line 163:
If <math>Y</math> is [[Conditional independence|conditionally independent]] of <math>Z</math> given <math>X</math> we have:
− 如果给定<math>X</math>,<math>Y</math>有条件地独立于<math>Z</math>,则我们有:
+ 如果在给定<math>X</math>的条件下,<math>Y</math>条件独立于<math>Z</math>,则有:
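The equation that this sentence introduces is not shown in this excerpt; the standard identity for this situation is

:<math>H(Y|X,Z) = H(Y|X),</math>

since <math>p(y|x,z) = p(y|x)</math> whenever <math>Y</math> and <math>Z</math> are conditionally independent given <math>X</math>.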
Line 185:
where <math>\operatorname{I}(X;Y)</math> is the [[mutual information]] between <math>X</math> and <math>Y</math>.
− 其中<math>\operatorname{I}(X;Y)</math>是<math>X</math>和<math>Y</math>之间的<font color="#ff8000"> 互信息</font>。
+ 其中<math>\operatorname{I}(X;Y)</math>是<math>X</math>和<math>Y</math>之间的'''<font color="#ff8000">互信息 mutual information</font>'''。
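The relation being referenced connects conditional entropy to mutual information; in its standard form (restated here, as the displayed equation is outside this excerpt):

:<math>H(Y|X) = H(Y) - \operatorname{I}(X;Y),</math>

so the mutual information measures exactly how much knowing <math>X</math> reduces the entropy of <math>Y</math>.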
Line 207:
The above definition is for discrete random variables. The continuous version of discrete conditional entropy is called ''conditional differential (or continuous) entropy''. Let <math>X</math> and <math>Y</math> be continuous random variables with a [[joint probability density function]] <math>f(x,y)</math>. The differential conditional entropy <math>h(X|Y)</math> is defined as<ref name=cover1991 />{{rp|249}}
− 上面的定义是针对离散随机变量的。离散条件熵的连续形式称为'''<font color="#ff8000"> 条件微分(或连续)熵Conditional differential (or continuous) entropy </font>'''。 令<math>X</math>和<math>Y</math>为具有联合概率密度函数<math>f(x,y)</math>的连续随机变量。则微分条件熵<math>h(X|Y)</math>定义为:<ref name=cover1991 />{{rp|249}}
+ 上面的定义是针对离散随机变量的。离散条件熵的连续形式称为'''<font color="#ff8000">条件微分(或连续)熵 Conditional differential (or continuous) entropy</font>'''。令<math>X</math>和<math>Y</math>为具有联合概率密度函数<math>f(x,y)</math>的连续随机变量,则微分条件熵<math>h(X|Y)</math>定义为:<ref name=cover1991 />{{rp|249}}
{{Equation box 1
Line 286:
* [[Likelihood function]]
− * '''<font color="#ff8000"> 熵(信息论)Entropy (information theory)</font>'''
− * '''<font color="#ff8000"> 互信息Mutual information</font>'''
− * '''<font color="#ff8000"> 条件量子熵Conditional quantum entropy</font>'''
− * '''<font color="#ff8000"> 信息差异Variation of information</font>'''
− * '''<font color="#ff8000"> 熵幂不等式Entropy power inequality</font>'''
− * '''<font color="#ff8000"> 似然函数Likelihood function</font>'''
+ * '''<font color="#ff8000">熵(信息论)Entropy (information theory)</font>'''
+ * '''<font color="#ff8000">互信息 Mutual information</font>'''
+ * '''<font color="#ff8000">条件量子熵 Conditional quantum entropy</font>'''
+ * '''<font color="#ff8000">信息差异 Variation of information</font>'''
+ * '''<font color="#ff8000">熵幂不等式 Entropy power inequality</font>'''
+ * '''<font color="#ff8000">似然函数 Likelihood function</font>'''