Normalized variants of the mutual information are provided by the coefficients of constraint, uncertainty coefficient or proficiency:

:<math>
C_{XY} = \frac{\operatorname{I}(X;Y)}{H(Y)}
~~~~\mbox{and}~~~~
C_{YX} = \frac{\operatorname{I}(X;Y)}{H(X)}.
</math>
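
To make the definitions concrete, here is a minimal, self-contained Python sketch. The joint distribution <code>p_xy</code> is a made-up illustrative table (an assumption for the example, not taken from the article), and the entropies are computed in bits:

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical 2x3 joint distribution p(x, y); any non-negative
# table summing to 1 would do.
p_xy = np.array([[0.20, 0.15, 0.05],
                 [0.10, 0.10, 0.40]])

def entropy(p):
    """Shannon entropy in bits, ignoring zero-probability cells."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

H_x = entropy(p_xy.sum(axis=1))            # H(X) from the marginal of X
H_y = entropy(p_xy.sum(axis=0))            # H(Y) from the marginal of Y
I_xy = H_x + H_y - entropy(p_xy.ravel())   # I(X;Y) = H(X) + H(Y) - H(X,Y)

C_xy = I_xy / H_y                          # coefficient C_XY
C_yx = I_xy / H_x                          # coefficient C_YX
print(C_xy, C_yx)                          # both in [0, 1], generally unequal
</syntaxhighlight>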

The two coefficients have values in [0, 1], but are not necessarily equal. In some cases a symmetric measure may be desired, such as the following redundancy measure:

:<math>R = \frac{\operatorname{I}(X;Y)}{H(X) + H(Y)}</math>

which attains a minimum of zero when the variables are independent and a maximum value of

:<math>R_\max = \frac{\min\left\{ H(X), H(Y) \right\}}{H(X) + H(Y)}</math>

when one variable becomes completely redundant with the knowledge of the other.

Another symmetrical measure is the symmetric uncertainty, given by

:<math>U(X, Y) = 2R = 2 \, \frac{\operatorname{I}(X;Y)}{H(X) + H(Y)}</math>

which represents the harmonic mean of the two uncertainty coefficients <math>C_{XY}</math> and <math>C_{YX}</math>.
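
Continuing the same illustrative sketch (same made-up joint table as above), <math>R</math> and the symmetric uncertainty are one-liners once the entropies are available, and the harmonic-mean identity can be checked numerically:

<syntaxhighlight lang="python">
import numpy as np

p_xy = np.array([[0.20, 0.15, 0.05],     # same made-up joint pmf
                 [0.10, 0.10, 0.40]])

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

H_x = entropy(p_xy.sum(axis=1))
H_y = entropy(p_xy.sum(axis=0))
I_xy = H_x + H_y - entropy(p_xy.ravel())

R = I_xy / (H_x + H_y)                   # redundancy measure
R_max = min(H_x, H_y) / (H_x + H_y)      # its maximum value
U = 2 * R                                # symmetric uncertainty

# U is the harmonic mean of the two uncertainty coefficients:
C_xy, C_yx = I_xy / H_y, I_xy / H_x
assert np.isclose(U, 2 / (1 / C_xy + 1 / C_yx))
</syntaxhighlight>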

If we consider mutual information as a special case of the total correlation or dual total correlation, the normalized versions are, respectively,

:<math>\frac{\operatorname{I}(X;Y)}{\min\left[ H(X),H(Y)\right]}</math> and <math>\frac{\operatorname{I}(X;Y)}{H(X,Y)} \; .</math>

This normalized version, also known as the Information Quality Ratio (IQR), quantifies the amount of information of a variable based on another variable against total uncertainty:

:<math>IQR(X, Y) = \operatorname{E}[\operatorname{I}(X;Y)]
= \frac{\operatorname{I}(X;Y)}{H(X, Y)}
= \frac{\sum_{x \in X} \sum_{y \in Y} p(x, y) \log p(x) p(y)}{\sum_{x \in X} \sum_{y \in Y} p(x, y) \log p(x, y)} - 1</math>
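
As a numerical sanity check of the two equivalent forms above, a short sketch with the same made-up joint table (all cells are positive here, so the logarithms are well defined; zero cells would have to be masked out of the sums):

<syntaxhighlight lang="python">
import numpy as np

p_xy = np.array([[0.20, 0.15, 0.05],
                 [0.10, 0.10, 0.40]])
p_x = p_xy.sum(axis=1, keepdims=True)    # column vector p(x)
p_y = p_xy.sum(axis=0, keepdims=True)    # row vector p(y)

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

H_xy = entropy(p_xy.ravel())
I_xy = entropy(p_x.ravel()) + entropy(p_y.ravel()) - H_xy

iqr_direct = I_xy / H_xy                           # I(X;Y) / H(X,Y)
iqr_sums = (np.sum(p_xy * np.log2(p_x * p_y))      # sum p log p(x)p(y)
            / np.sum(p_xy * np.log2(p_xy)) - 1)    # over sum p log p(x,y)
assert np.isclose(iqr_direct, iqr_sums)
</syntaxhighlight>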

There is a normalization which derives from first thinking of mutual information as an analogue to [[covariance]] (thus [[Entropy (information theory)|Shannon entropy]] is analogous to [[variance]]). Then the normalized mutual information is calculated akin to the [[Pearson product-moment correlation coefficient|Pearson correlation coefficient]],

:<math>\frac{\operatorname{I}(X;Y)}{\sqrt{H(X) \, H(Y)}} \; .</math>
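
Under the covariance analogy, the denominator is a geometric mean of the two entropies. The following sketch compares the normalizations discussed in this section on the same made-up table; scikit-learn's <code>normalized_mutual_info_score</code>, for instance, exposes these choices of denominator through its <code>average_method</code> parameter (for clustering label vectors rather than joint tables):

<syntaxhighlight lang="python">
import numpy as np

p_xy = np.array([[0.20, 0.15, 0.05],
                 [0.10, 0.10, 0.40]])

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

H_x = entropy(p_xy.sum(axis=1))
H_y = entropy(p_xy.sum(axis=0))
I_xy = H_x + H_y - entropy(p_xy.ravel())

# The normalizations of I(X;Y) from this section differ only in how
# the marginal entropies are averaged in the denominator:
print(I_xy / min(H_x, H_y))           # min
print(I_xy / np.sqrt(H_x * H_y))      # geometric mean (Pearson analogue)
print(I_xy / (0.5 * (H_x + H_y)))     # arithmetic mean (= symmetric uncertainty)
</syntaxhighlight>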