Here, [math]\tilde{X}[/math] and [math]\tilde{Y}[/math] denote the independent and dependent variables, respectively, after [math]X[/math] has been intervened on to follow a uniform distribution (i.e. the prior distribution), while the causal mechanism [math]f[/math] is kept unchanged. It is worth noting that in the original literature<ref name='tononi_2008'/> the author did not explicitly give the form of the [[KL Divergence]]. In subsequent literature<ref name='IIT3.0'>{{cite journal|author1=Oizumi M|author2=Albantakis L|author3=Tononi G|year=2014|title=From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0|journal=PLoS Computational Biology|volume=10|number=5|page=e1003588}}</ref>, the authors instead used symmetric measures of distance between probability distributions, such as the [[Earth Mover's Distance]].
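The contrast between the two distance measures can be checked numerically. The sketch below uses two hypothetical distributions over four states (chosen purely for illustration) to show that the KL divergence is asymmetric in its arguments, whereas the Earth Mover's Distance is symmetric:

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Two hypothetical distributions over the same 4 states (illustrative only)
states = np.arange(4)
p = np.array([0.7, 0.1, 0.1, 0.1])
q = np.array([0.25, 0.25, 0.25, 0.25])

def kl(p, q):
    """KL divergence in bits; assumes both distributions have full support."""
    return float(np.sum(p * np.log2(p / q)))

# KL divergence is asymmetric: D(p||q) != D(q||p) in general
d_kl_pq = kl(p, q)
d_kl_qp = kl(q, p)

# The Earth Mover's Distance is symmetric by construction
d_emd_pq = wasserstein_distance(states, states, p, q)
d_emd_qp = wasserstein_distance(states, states, q, p)

print(d_kl_pq, d_kl_qp)    # differ
print(d_emd_pq, d_emd_qp)  # equal
```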
 
 
In fact, [math]ei(f,Y_0)[/math] is the effective information for a particular value [math]Y_0[/math]. Taking the average over all values of [math]Y_0[/math] yields the effective information in the usual sense, namely Equation {{EquationNote|1}}. To see this, we first need to introduce the [[Bayesian Formula]], which is:
 
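The averaging step can be sketched numerically. The snippet below assumes a hypothetical 3-state transition matrix (not taken from the literature) and, following Tononi's formulation, treats [math]ei(f,Y_0)[/math] as the KL divergence between the Bayesian posterior over [math]\tilde{X}[/math] given [math]Y_0[/math] and the uniform prior; the [math]P(Y_0)[/math]-weighted average then coincides with the mutual information [math]I(\tilde{X};\tilde{Y})[/math]:

```python
import numpy as np

# Hypothetical 3-state mechanism f as a transition matrix (illustrative only):
# tpm[x, y] = P(Y = y | do(X = x)); each row sums to 1.
tpm = np.array([
    [0.9, 0.05, 0.05],
    [0.1, 0.8,  0.1 ],
    [0.0, 0.2,  0.8 ],
])

n = tpm.shape[0]
prior = np.full(n, 1.0 / n)   # X~ intervened to a uniform (prior) distribution
p_y = prior @ tpm             # marginal P(Y~ = y0)

# Bayes' formula: P(X~ = x | Y~ = y0) = P(y0 | x) P(x) / P(y0)
posterior = (tpm * prior[:, None]) / p_y[None, :]

def kl(p, q):
    """KL divergence in bits, skipping zero-probability terms of p."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

# ei(f, y0): KL divergence between the posterior over X~ and the uniform prior
ei_y0 = np.array([kl(posterior[:, y0], prior) for y0 in range(n)])

# Averaging over Y0 weighted by P(y0) recovers the mutual information I(X~; Y~)
EI = float(p_y @ ei_y0)
MI = sum(kl(tpm[x], p_y) / n for x in range(n))  # same quantity, other direction
print(EI, MI)  # the two values agree
```

The agreement of the two computations is exactly the identity [math]I(\tilde{X};\tilde{Y}) = \sum_{y} P(y)\, D_{KL}\big(P(\tilde{X}|y)\,\|\,P(\tilde{X})\big)[/math].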