|{{EquationRef|1}}}}
 
The final equation shows that EI is composed of two terms: the first is the average of the negative entropies of the rows of the causal mechanism matrix, and the second is the entropy of the variable [math]Y[/math]. In the first term, the probability distribution [math]Pr(X=x)[/math] of [math]X[/math] serves as the weight when averaging the row entropies. Only when this weight is the same for every row (i.e., when we intervene to make [math]X[/math] uniformly distributed) are all rows of the causal mechanism matrix treated equally.
    
If the distribution is not uniform, some rows receive larger weights and others smaller ones. This weighting introduces a bias, which prevents EI from reflecting the intrinsic properties of the causal mechanism.
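The two-term decomposition above can be sketched numerically. The following is a minimal illustration, not the article's official implementation: the function name `effective_information`, the use of log base 2 (bits), and the representation of the causal mechanism as a row-stochastic matrix are assumptions made for this example.

```python
import numpy as np

def effective_information(tpm):
    """EI of a causal mechanism given as a row-stochastic matrix,
    where tpm[i, j] = Pr(Y = j | do(X = i)).

    With X intervened to be uniform, EI = H(Y) - <H(row)>:
    the entropy of Y minus the average row entropy."""
    tpm = np.asarray(tpm, dtype=float)

    def entropy(p):
        p = p[p > 0]  # 0 * log 0 is taken to be 0
        return -np.sum(p * np.log2(p))

    # Second term: entropy of Y when X is uniformly distributed,
    # so the marginal of Y is the plain average of the rows.
    h_y = entropy(tpm.mean(axis=0))

    # First term: each row's entropy weighted equally (weight 1/n),
    # which is exactly the effect of the uniform intervention on X.
    avg_row_entropy = np.mean([entropy(row) for row in tpm])

    return h_y - avg_row_entropy
```

For a deterministic bijective mechanism on four states (an identity matrix), every row has zero entropy and the marginal of Y is uniform, so EI = 2 bits; if all rows are identical, the two terms cancel and EI = 0.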