===Normalized Determinism and Degeneracy===
In the original paper by [[Erik Hoel]] et al.<ref name="hoel_2013">{{cite journal|last1=Hoel|first1=Erik P.|last2=Albantakis|first2=L.|last3=Tononi|first3=G.|title=Quantifying causal emergence shows that macro can beat micro|journal=Proceedings of the National Academy of Sciences|volume=110|issue=49|page=19790–19795|year=2013|url=https://doi.org/10.1073/pnas.1314922110}}</ref>, determinism and degeneracy are defined in normalized form, that is, divided by a quantity that depends on the size of the system. To distinguish the two versions, we call the normalized quantities the ''determinism coefficient'' and the ''degeneracy coefficient''.
Specifically, [[Erik Hoel]] et al. decomposed the normalized effective information, Eff, into two terms corresponding to the determinism coefficient and the degeneracy coefficient:
    
<math>
\mathrm{Eff} = \mathrm{Det} - \mathrm{Deg}
</math>
These two terms are defined as:
    
<math>
\mathrm{Det} = \frac{\log_2 N - \left\langle H\left(W_i^{out}\right) \right\rangle}{\log_2 N},\qquad
\mathrm{Deg} = \frac{\log_2 N - H\left(\left\langle W^{out} \right\rangle\right)}{\log_2 N}
</math>

where <math>N</math> is the number of states of the system, <math>W_i^{out}</math> is the transition probability distribution out of state <math>i</math>, <math>\left\langle H\left(W_i^{out}\right) \right\rangle</math> is the entropy of these distributions averaged over all states, and <math>\left\langle W^{out} \right\rangle</math> is the average of the distributions themselves.
In summary, determinism measures how confidently we can predict future states given the probability distribution of the current state, while degeneracy measures how confidently we can infer past states from the current one. If states become degenerate under the dynamics, less information remains available for tracing the history back. When the dynamics underlying a system has high determinism and low degeneracy, it exhibits a pronounced causal effect.
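The two coefficients can be computed directly from a transition probability matrix (TPM). The following is a minimal sketch, not from the original article: it assumes the standard formulas of Hoel et al. (2013), and the function names (`entropy`, `coefficients`) are illustrative.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits; terms with zero probability contribute 0."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def coefficients(tpm):
    """Determinism and degeneracy coefficients of a TPM.

    Row i of `tpm` is the distribution over next states given state i.
    """
    n = tpm.shape[0]
    logn = np.log2(n)
    # <H(W_i^out)>: average entropy of the per-state transition distributions
    avg_row_entropy = np.mean([entropy(row) for row in tpm])
    # <W^out>: the average of the transition distributions themselves
    mixed = tpm.mean(axis=0)
    det = (logn - avg_row_entropy) / logn  # determinism coefficient
    deg = (logn - entropy(mixed)) / logn   # degeneracy coefficient
    return det, deg

# A deterministic, non-degenerate (permutation) dynamics on 4 states:
tpm = np.array([[0., 1., 0., 0.],
                [0., 0., 1., 0.],
                [0., 0., 0., 1.],
                [1., 0., 0., 0.]])
det, deg = coefficients(tpm)
print(det, deg)  # → 1.0 0.0, so Eff = det - deg = 1
```

For this permutation matrix each row has zero entropy (fully deterministic) while the averaged distribution is uniform (no degeneracy), so Eff attains its maximum value of 1.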
    
==Functional Properties of EI==
</math>
where <math>\epsilon</math> and <math>\delta</math> denote the magnitudes of the observational noise and the intervention noise, respectively. --> A derivation similar to the one above first appeared in Hoel's 2013 paper<ref name="hoel_2013" /> and was discussed in detail in the article [[神经信息压缩器]]<ref name="zhang_nis">{{cite journal|title=Neural Information Squeezer for Causal Emergence|first1=Jiang|last1=Zhang|first2=Kaiwei|last2=Liu|journal=Entropy|year=2022|volume=25|issue=1|page=26|url=https://api.semanticscholar.org/CorpusID:246275672}}</ref>.
 
===The High-Dimensional Case===
 
We can generalize the above EI calculation for a one-dimensional variable to the more general n-dimensional setting, i.e.: {{NumBlk|:|