Here, P is the microstate Markov transition matrix, with dimensions N×N, where N is the number of microstates. P′ is the macrostate transition matrix obtained after coarse-graining, with dimensions M×M, where M<N is the number of macrostates.
Coarse-graining a Markov transition matrix typically involves two steps: 1) grouping the N microstates into M macrostates; and 2) reducing the Markov transition matrix accordingly. For the specific methods of coarse-graining a Markov chain, see [[马尔科夫链的粗粒化|Markov chain coarse-graining]].
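As a concrete illustration of these two steps, here is a minimal sketch of a lumping function (the name <code>coarse_grain</code> and the choice of uniformly averaging over the microstates of each group are our own assumptions, not a method fixed by the text):

```python
import numpy as np

def coarse_grain(P, groups):
    """Lump an N x N transition probability matrix into an M x M macro TPM.

    groups: list of index lists, one per macrostate. Each macro transition
    probability is the average, over the microstates of the source group,
    of their total probability of landing in the target group.
    """
    P = np.asarray(P, dtype=float)
    M = len(groups)
    P_macro = np.zeros((M, M))
    for a, ga in enumerate(groups):
        for b, gb in enumerate(groups):
            # sum over target columns, then average over source rows
            P_macro[a, b] = P[np.ix_(ga, gb)].sum(axis=1).mean()
    return P_macro
```

For the 4-state matrix of the example below, `coarse_grain(P, [[0, 1, 2], [3]])` returns the 2×2 identity matrix.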
If the computed CE > 0, the system is said to exhibit [[因果涌现|causal emergence]]; otherwise, it does not.
Below, we present a concrete example of causal emergence:
 
{|
|+Example of Causal Emergence in a Markov Chain
 
|-
|<math>
P_m=\begin{pmatrix}
\frac{1}{3} & \frac{1}{3} & \frac{1}{3} & 0\\
\frac{1}{3} & \frac{1}{3} & \frac{1}{3} & 0\\
\frac{1}{3} & \frac{1}{3} & \frac{1}{3} & 0\\
0 & 0 & 0 & 1
\end{pmatrix}
</math>||<math>
P_M=\begin{pmatrix}
1 & 0\\
0 & 1
\end{pmatrix}
</math>
|-
|[math]\begin{aligned}&Det(P_m)=0.81\ bits,\\&Deg(P_m)=0\ bits,\\&EI(P_m)=0.81\ bits\end{aligned}[/math]||[math]\begin{aligned}&Det(P_M)=1\ bits,\\&Deg(P_M)=0\ bits,\\&EI(P_M)=1\ bits\end{aligned}[/math]
|}
In this example, the microstate transition matrix is a 4×4 matrix in which the first three states transition to one another with probability 1/3. This gives the transition matrix relatively low determinism, so its EI is not very high, at 0.81 bits. If we coarse-grain this matrix, merging the first three states into a single macrostate a and turning the last state into a second macrostate b, then all transitions among the original three microstates become internal transitions within macrostate a. The transition probability matrix thus becomes [math]P_M[/math], whose EI is 1. In this example, the [[因果涌现度量|causal emergence measure]] can be computed as:
    
<math>
CE = EI(P_M) - EI(P_m) = 1 - 0.81 = 0.19\ bits
</math>
That is, there is 0.19 bits of causal emergence.
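This number can be checked numerically. The following sketch assumes the standard determinism-minus-degeneracy definition of EI under a uniform intervention distribution (the helper name <code>ei</code> is ours):

```python
import numpy as np

def ei(P):
    """EI of a row-normalized TPM: determinism minus degeneracy (in bits)."""
    P = np.asarray(P, dtype=float)
    n = len(P)
    h = lambda p: -np.sum(p[p > 0] * np.log2(p[p > 0]))  # Shannon entropy
    det = np.log2(n) - np.mean([h(row) for row in P])    # determinism
    deg = np.log2(n) - h(P.mean(axis=0))                 # degeneracy
    return det - deg

P_m = np.array([[1/3, 1/3, 1/3, 0]] * 3 + [[0, 0, 0, 1]])  # micro TPM
P_M = np.eye(2)                                            # macro TPM
ce = ei(P_M) - ei(P_m)   # ≈ 0.19 bits
```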
Sometimes, the [[因果涌现度量|causal emergence measure]] is instead computed from the normalized EI:
    
<math>
ce = \mathrm{eff}(P_M) - \mathrm{eff}(P_m) = \frac{EI(P_M)}{\log_2 2} - \frac{EI(P_m)}{\log_2 4} = 1 - 0.405 = 0.595
</math>
Since the normalized EI eliminates the effect of system size, the resulting causal emergence measure is larger.<!--[[文件:Example1.png|815x815px|无框|居中]]

The figure above shows the transition probability matrices of several Markov chains. (a) has high determinism and low degeneracy, so its overall eff is relatively high. (b) has both high determinism and high degeneracy, so its eff is 0. (c) has lower determinism than (a), and (d) again combines high determinism with high degeneracy, giving a low eff. Both (c) and (d) can be coarse-grained with the same strategy (merging the first four states into one) to obtain (e). Since (e) has high determinism and no degeneracy, its eff is higher than that of (c) and (d).-->
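Numerically, reusing the determinism-minus-degeneracy definition of EI (the helper name <code>eff</code> is our own), the normalized measure for the same example works out larger than the unnormalized one:

```python
import numpy as np

def eff(P):
    """Normalized EI (effectiveness): EI divided by log2 of the state count."""
    P = np.asarray(P, dtype=float)
    n = len(P)
    h = lambda p: -np.sum(p[p > 0] * np.log2(p[p > 0]))  # Shannon entropy
    ei = (np.log2(n) - np.mean([h(r) for r in P])) - (np.log2(n) - h(P.mean(axis=0)))
    return ei / np.log2(n)

P_m = np.array([[1/3, 1/3, 1/3, 0]] * 3 + [[0, 0, 0, 1]])
P_M = np.eye(2)
ce_norm = eff(P_M) - eff(P_m)   # ≈ 0.59, larger than the unnormalized 0.19 bits
```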
==Python Source Code for Calculating EI==
Below is the Python source code for computing the EI of a Markov transition probability matrix. The input <code>tpm</code> is a Markov transition matrix satisfying the row-normalization condition. The function returns <code>ei_all</code> (the EI value), <code>eff</code> (effectiveness), <code>det</code> and <code>deg</code> (determinism and degeneracy), and <code>det_c</code> and <code>deg_c</code> (the '''determinism coefficient''' and '''degeneracy coefficient''').
    
python:<syntaxhighlight lang="python3">
 
python:<syntaxhighlight lang="python3">
第590行: 第593行:  
tpm_ei(mi_states)
 
tpm_ei(mi_states)
 
</syntaxhighlight>
 
</syntaxhighlight>
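The body of <code>tpm_ei</code> is not preserved in this revision. A minimal sketch consistent with the documented inputs and return values, assuming the standard determinism-minus-degeneracy formulas (the original implementation may differ in detail):

```python
import numpy as np

def tpm_ei(tpm):
    """EI of a row-normalized N x N transition probability matrix (in bits)."""
    tpm = np.asarray(tpm, dtype=float)
    n = tpm.shape[0]

    def h(p):
        # Shannon entropy in bits, ignoring zero entries
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    det = np.log2(n) - np.mean([h(row) for row in tpm])  # determinism
    deg = np.log2(n) - h(tpm.mean(axis=0))               # degeneracy
    ei_all = det - deg                                   # effective information
    eff = ei_all / np.log2(n)                            # effectiveness
    det_c = det / np.log2(n)                             # determinism coefficient
    deg_c = deg / np.log2(n)                             # degeneracy coefficient
    return ei_all, eff, det, deg, det_c, deg_c
```

On the 4×4 micro matrix of the example above this yields <code>ei_all ≈ 0.81</code>, <code>deg = 0</code>, matching the table.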
=EI for Continuous Variables=
In reality, most systems must be considered in continuous spaces, so it is necessary to extend the concept of EI to continuous systems.
The core idea of this extension is to decompose the causal mechanism in continuous space into a deterministic function mapping [math]f(X)[/math] plus a random noise variable [math]\xi[/math]. In the cases listed below, [math]\xi\sim \mathcal{N}(0,\Sigma)[/math], i.e., the noise is Gaussian, which makes an analytical expression for EI obtainable. More general cases have not yet been discussed in the literature.
==Random Function Mapping==
Erik Hoel was the first to consider this problem, proposing the framework of [[因果几何|causal geometry]]<ref name="Chvykov_causal_geometry">{{cite journal|author1=Chvykov P|author2=Hoel E.|title=Causal Geometry|journal=Entropy|year=2021|volume=23|issue=1|page=24|url=https://doi.org/10.3390/e2}}</ref>. This framework not only pioneered the calculation of EI for random function mappings, but also introduced the concept of intervention noise, defined a local form of EI, and drew analogies and comparisons with [[信息几何|information geometry]]. Below, we discuss one-dimensional function mappings, multi-dimensional function mappings, and the local form of EI in turn.
   
===One-Dimensional Function Mapping===
First, we consider the simplest case: