Here, [math]\tilde{X}[/math] and [math]\tilde{Y}[/math] respectively denote the independent and dependent variables after [math]X[/math] has been intervened upon and set to a uniform distribution (i.e. the prior distribution), while the causal mechanism [math]f[/math] is kept unchanged. It is worth noting that in the original literature<ref name='tononi_2008'/>, the author did not explicitly give the form of the [[KL Divergence]]. In subsequent literature<ref name='IIT3.0'>{{cite journal|author1=Oizumi M|author2=Albantakis L|author3=Tononi G|year=2014|title=From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0|journal=PLoS Computational Biology|volume=10|number=5|page=e1003588}}</ref>, the authors instead used other symmetric measures of the distance between probability distributions, such as the [[Earth Mover's Distance]].
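For intuition, the following minimal Python sketch (with arbitrary toy distributions, not taken from the literature) contrasts the asymmetric [[KL Divergence]] with the symmetric [[Earth Mover's Distance]] for two discrete distributions:

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import entropy, wasserstein_distance

# Two toy distributions over the same four states (illustrative values only)
p = np.array([0.1, 0.4, 0.4, 0.1])
q = np.full(4, 0.25)  # uniform, playing the role of the intervened prior

# KL divergence D_KL(p || q): asymmetric in its two arguments
kl = entropy(p, q)

# Earth Mover's (Wasserstein) distance: symmetric in its two arguments
emd = wasserstein_distance(np.arange(4), np.arange(4), p, q)

print(f"KL divergence: {kl:.4f}, EMD: {emd:.4f}")
</syntaxhighlight>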
 
In fact, [math]ei(f,Y_0)[/math] is the effective information for one particular value [math]Y_0[/math]. If we average it over all values of [math]Y_0[/math], we obtain effective information in the usual sense, i.e. Equation {{EquationNote|1}}. To see this, we first need to introduce the [[Bayesian Formula]], which is:
      
<math>
P(\tilde{X}=x|\tilde{Y_0})=\frac{P(\tilde{Y_0}|\tilde{X}=x)P(\tilde{X}=x)}{P(\tilde{Y_0})}
</math>
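As a concrete numerical illustration, here is a minimal sketch assuming a hypothetical two-state causal mechanism given as a row-stochastic matrix <code>f</code> (toy values, not from the source), which computes the posterior over the intervened input directly from this formula:

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical causal mechanism: f[x, y] = P(Ỹ=y | X̃=x) (toy values)
f = np.array([[0.9, 0.1],
              [0.2, 0.8]])

prior = np.full(2, 0.5)        # P(X̃): uniform distribution after the intervention
y0 = 0                         # condition on the event Ỹ0, i.e. Ỹ = Y0
p_y0 = prior @ f[:, y0]        # marginal P(Ỹ0) = Σ_x P(Ỹ0 | X̃=x) P(X̃=x)

posterior = f[:, y0] * prior / p_y0   # Bayes' formula: P(X̃=x | Ỹ0)
print(posterior)               # [0.818..., 0.181...]
</syntaxhighlight>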
Here, [math]\tilde{Y_0}\equiv (\tilde{Y}=Y_0)[/math]. Note that the conditional probability [math]P(\tilde{Y_0}|\tilde{X})[/math] is in fact the causal mechanism [math]f[/math]. Substituting it into the formula for [math]ei(f,Y_0)[/math], we readily obtain:
<math>
ei(f,Y_0)=\sum_{x}P(\tilde{X}=x|\tilde{Y_0})\log\frac{P(\tilde{X}=x|\tilde{Y_0})}{P(\tilde{X}=x)}=\sum_{x}\frac{P(\tilde{Y_0}|\tilde{X}=x)P(\tilde{X}=x)}{P(\tilde{Y_0})}\log\frac{P(\tilde{Y_0}|\tilde{X}=x)}{P(\tilde{Y_0})}
</math>
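Under the same hypothetical toy mechanism as above, [math]ei(f,Y_0)[/math] can then be computed as the KL divergence between the Bayesian posterior and the uniform prior; a minimal sketch:

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import entropy

# Hypothetical mechanism (same toy values as above): f[x, y] = P(Ỹ=y | X̃=x)
f = np.array([[0.9, 0.1],
              [0.2, 0.8]])

def ei_given_y0(f, y0):
    """ei(f, Y0): KL divergence between the Bayes posterior P(X̃ | Ỹ0)
    and the uniform (intervened) prior P(X̃), in nats."""
    n = f.shape[0]
    prior = np.full(n, 1.0 / n)           # P(X̃) under the uniform intervention
    p_y0 = prior @ f[:, y0]               # marginal P(Ỹ0)
    posterior = f[:, y0] * prior / p_y0   # Bayes' formula
    return entropy(posterior, prior)      # D_KL(posterior || prior)

print(ei_given_y0(f, 0), ei_given_y0(f, 1))  # ~0.219 and ~0.344
</syntaxhighlight>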
Taking the expectation of the above expression over all values of [math]\tilde{Y_0}[/math], we obtain:
    
<math>
EI=\sum_{Y_0}P(\tilde{Y_0})\,ei(f,Y_0)=\sum_{Y_0}\sum_{x}P(\tilde{Y_0}|\tilde{X}=x)P(\tilde{X}=x)\log\frac{P(\tilde{Y_0}|\tilde{X}=x)}{P(\tilde{Y_0})}=I(\tilde{X};\tilde{Y})
</math>
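Numerically, this expectation can be checked against the mutual information directly; continuing the same hypothetical toy example, the [math]P(\tilde{Y_0})[/math]-weighted average of [math]ei(f,Y_0)[/math] coincides with [math]I(\tilde{X};\tilde{Y})[/math]:

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import entropy

# Hypothetical mechanism (same toy values as above)
f = np.array([[0.9, 0.1],
              [0.2, 0.8]])
n = f.shape[0]
prior = np.full(n, 1.0 / n)   # uniform intervention on X̃
p_y = prior @ f               # marginal distribution of Ỹ

# EI as the P(Ỹ0)-weighted average of ei(f, Y0)
ei_vals = [entropy(f[:, y] * prior / p_y[y], prior) for y in range(f.shape[1])]
EI = float(np.dot(p_y, ei_vals))

# Mutual information I(X̃; Ỹ) = H(Ỹ) − H(Ỹ | X̃)
mi = entropy(p_y) - float(np.dot(prior, [entropy(f[x]) for x in range(n)]))

print(EI, mi)  # both ≈ 0.275 nats
</syntaxhighlight>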
The introduction of [math]ei[/math] helps us understand how a given local causal mechanism changes the distribution of the original variable, or, in [[Tononi]]'s language, how a mechanism generates information; see <ref name=tononi_2008 /> and [[Integrated Information Theory]] for details.
    
=Effective Information of Markov Chains=