==Effective Information as the Distribution Difference==
In the literature<ref name='tononi_2008'>{{cite journal|author=Giulio Tononi|title=Consciousness as Integrated Information: a Provisional Manifesto|journal=Biol. Bull.|volume=215|page=216–242|year=2008}}</ref>, the author defines effective information in another way. This new form of effective information depends on the state of the effect variable ([math]Y[/math]): after intervening to make [math]X[/math] uniformly distributed, the state of [math]\tilde{Y}[/math] is fixed at a given value [math]Y_0[/math]. Under this condition, effective information is defined as the [[KL Divergence]] between two probability distributions. The first is the prior distribution of the cause variable [math]X[/math], i.e. the uniform distribution [math]U[/math] on [math]\mathcal{X}[/math]. The second arises as follows: under the causal mechanism [math]f[/math] from [math]X[/math] to [math]Y[/math], the effect variable [math]Y[/math] becomes another variable [math]\tilde{Y}[/math]; conditioning on the observation that this effect variable takes the value [math]Y_0[/math], we can infer in reverse the posterior distribution of the cause variable [math]\tilde{X}[/math], i.e. [math]P(\tilde{X}|\tilde{Y}=Y_0,f)[/math].
The prior and posterior distributions will then differ, and this difference is exactly the effective information generated by the causal mechanism [math]f[/math]. It can be defined as:
    
<math>
ei(f,Y_0) = D_{KL}\left[P(\tilde{X}|\tilde{Y}=Y_0,f)\,\Big\|\,U(\tilde{X})\right]
</math>
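The definition above can be sketched numerically. The following is a minimal illustration, not code from the cited works; the function name `ei_given_y0` and the toy transition matrix `tpm` are assumptions introduced here. It computes [math]ei(f,Y_0)[/math] for a discrete mechanism by forming the posterior over the cause variable via Bayes' rule and taking its KL divergence from the uniform prior:

```python
import numpy as np

def ei_given_y0(tpm, y0):
    """Effective information for one observed effect state y0.

    tpm[i, j] = P(Y = j | X = i) encodes the causal mechanism f.
    Returns KL(P(X~|Y~=y0, f) || U) in bits.
    """
    n = tpm.shape[0]
    prior = np.full(n, 1.0 / n)          # intervention: X ~ uniform
    joint = prior * tpm[:, y0]           # P(X~ = i, Y~ = y0)
    posterior = joint / joint.sum()      # Bayes: P(X~ = i | Y~ = y0)
    mask = posterior > 0                 # convention: 0 * log(0/q) = 0
    return np.sum(posterior[mask] * np.log2(posterior[mask] / prior[mask]))

# Example: a deterministic identity mechanism on two states
tpm = np.array([[1.0, 0.0],
                [0.0, 1.0]])
print(ei_given_y0(tpm, 0))   # observing Y_0 = 0 pins X down exactly: 1.0 bit
```

For a completely noisy mechanism (all rows of `tpm` identical), the posterior equals the prior and the value is 0 bits, matching the intuition that such a mechanism generates no effective information.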
Here, [math]\tilde{X}[/math] and [math]\tilde{Y}[/math] denote the cause and effect variables, respectively, after [math]X[/math] has been intervened into a uniform distribution (i.e. the prior distribution), with the causal mechanism [math]f[/math] kept unchanged. It is worth noting that in<ref name='tononi_2008'/> the author did not explicitly give the [[KL Divergence]] form; in subsequent literature (Integrated Information Theory 3.0<ref name='IIT3.0'>{{cite journal|author1=Oizumi M|author2=Albantakis L|author3=Tononi G|year=2014|title=From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0|journal=PLoS Computational Biology|volume=10|number=5|page=e1003588}}</ref>), the authors used other symmetric measures of distance between probability distributions, such as the [[Earth Mover's Distance]].
In fact, [math]ei(f,Y_0)[/math] is the effective information for one particular value [math]Y_0[/math]. If we average over all values of [math]Y_0[/math], we obtain effective information in the usual sense, i.e. Equation {{EquationNote|1}}. To see this, we first need to introduce [[Bayes' formula]]:
      
<math>
P(\tilde{X}|\tilde{Y_0}) = \frac{P(\tilde{Y_0}|\tilde{X})P(\tilde{X})}{P(\tilde{Y_0})}
</math>
Here, [math]\tilde{Y_0}\equiv (\tilde{Y}=Y_0)[/math].
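The averaging claim above can be checked numerically. The sketch below is illustrative only (the noisy `tpm` and all variable names are assumptions, not from the source): it averages [math]ei(f,Y_0)[/math] over [math]P(\tilde{Y_0})[/math] and confirms that the result equals the mutual information [math]I(\tilde{X};\tilde{Y})[/math] under the uniform intervention, i.e. effective information in the usual sense:

```python
import numpy as np

def kl(p, q):
    """KL divergence in bits, with the 0 * log(0/q) = 0 convention."""
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

tpm = np.array([[0.9, 0.1],       # P(Y = j | X = i), a noisy mechanism
                [0.2, 0.8]])
n = tpm.shape[0]
prior = np.full(n, 1.0 / n)       # uniform intervention on X
p_y = prior @ tpm                 # marginal P(Y~ = j)

# Average ei(f, Y_0) weighted by P(Y~ = Y_0)
avg_ei = 0.0
for y0 in range(tpm.shape[1]):
    posterior = prior * tpm[:, y0] / p_y[y0]   # Bayes' formula
    avg_ei += p_y[y0] * kl(posterior, prior)

# Mutual information I(X~; Y~) computed directly from the joint
joint = prior[:, None] * tpm
mi = kl(joint.ravel(), np.outer(prior, p_y).ravel())

print(np.isclose(avg_ei, mi))     # True: the two quantities coincide
```

The agreement is not an accident of this example: expanding the weighted sum of KL divergences term by term gives exactly the double sum defining [math]I(\tilde{X};\tilde{Y})[/math], which is the derivation carried out below.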
 
Note that the conditional probability [math]P(\tilde{Y_0}|\tilde{X})[/math] is in fact just the causal mechanism [math]f[/math]. Substituting it into the formula for [math]ei(f,Y_0)[/math], we readily obtain:
    
<math>