Line 377:
| \begin{align} | | \begin{align} |
− | d(X,Y) &= \Eta(X,Y) - \operatorname{I}(X;Y) \\ | + | d(X,Y) &= H(X,Y) - \operatorname{I}(X;Y) \\ |
− | &= \Eta(X) + \Eta(Y) - 2\operatorname{I}(X;Y) \\ | + | &= H(X) + H(Y) - 2\operatorname{I}(X;Y) \\ |
− | &= \Eta(X|Y) + \Eta(Y|X) | + | &= H(X|Y) + H(Y|X) |
| \end{align} | | \end{align} |
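The three equivalent forms can be checked numerically. The following is a minimal sketch (not part of the article; the joint distribution <code>p_xy</code> is an arbitrary illustrative choice): it derives the marginal, joint, and conditional entropies of two discrete variables and confirms that all three expressions for <math>d(X,Y)</math> agree.

<syntaxhighlight lang="python">
import math

# Arbitrary illustrative joint distribution p(x, y) over X, Y in {0, 1}.
p_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}

def entropy(dist):
    """Shannon entropy (in bits) of a distribution given as {outcome: prob}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Marginal distributions p(x) and p(y).
p_x, p_y = {}, {}
for (x, y), p in p_xy.items():
    p_x[x] = p_x.get(x, 0) + p
    p_y[y] = p_y.get(y, 0) + p

H_xy = entropy(p_xy)                    # joint entropy H(X,Y)
H_x, H_y = entropy(p_x), entropy(p_y)   # marginal entropies H(X), H(Y)
I = H_x + H_y - H_xy                    # mutual information I(X;Y)
H_x_given_y = H_xy - H_y                # conditional entropy H(X|Y)
H_y_given_x = H_xy - H_x                # conditional entropy H(Y|X)

# All three forms of d(X,Y) print the same value (about 1.722 bits here).
print(H_xy - I)                   # H(X,Y) - I(X;Y)
print(H_x + H_y - 2 * I)          # H(X) + H(Y) - 2 I(X;Y)
print(H_x_given_y + H_y_given_x)  # H(X|Y) + H(Y|X)
</syntaxhighlight>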
Line 398:
− | If <math>X, Y</math> are discrete random variables then all the entropy terms are non-negative, so <math>0 \le d(X,Y) \le \Eta(X,Y)</math> and one can define a normalized distance | + | If <math>X, Y</math> are discrete random variables then all the entropy terms are non-negative, so <math>0 \le d(X,Y) \le H(X,Y)</math> and one can define a normalized distance |
Line 408:
− | :<math>D(X,Y) = \frac{d(X, Y)}{\Eta(X, Y)} \le 1.</math> | + | :<math>D(X,Y) = \frac{d(X, Y)}{H(X, Y)} \le 1.</math> |
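Continuing the sketch above, the normalized distance is a one-line extension; the division is well defined whenever the joint entropy is nonzero, which this sketch assumes:

<syntaxhighlight lang="python">
D = (H_xy - I) / H_xy   # normalized distance D(X,Y), with 0 <= D <= 1
</syntaxhighlight>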
Line 433:
− | :<math>D(X,Y) = 1 - \frac{\operatorname{I}(X; Y)}{\Eta(X, Y)}.</math> | + | :<math>D(X,Y) = 1 - \frac{\operatorname{I}(X; Y)}{H(X, Y)}.</math> |
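The equivalence with the earlier definition follows by substituting the first identity above, <math>d(X,Y) = H(X,Y) - \operatorname{I}(X;Y)</math>, into the numerator:

:<math>D(X,Y) = \frac{H(X,Y) - \operatorname{I}(X;Y)}{H(X,Y)} = 1 - \frac{\operatorname{I}(X;Y)}{H(X,Y)}.</math>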
Line 457:
− | :<math>D^\prime(X, Y) = 1 - \frac{\operatorname{I}(X; Y)}{\max\left\{\Eta(X), \Eta(Y)\right\}}</math> | + | :<math>D^\prime(X, Y) = 1 - \frac{\operatorname{I}(X; Y)}{\max\left\{H(X), H(Y)\right\}}</math> |
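In the sketch above, this alternative normalization is again a single hypothetical line:

<syntaxhighlight lang="python">
D_prime = 1 - I / max(H_x, H_y)   # alternative normalized distance D'(X,Y)
</syntaxhighlight>

Because <math>\max\left\{H(X), H(Y)\right\} \le H(X, Y)</math>, this normalization divides by a smaller quantity, so <math>D^\prime(X, Y) \le D(X, Y)</math>.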