<math>
P=(P_1,P_2,\cdots,P_N)^T
</math>
where [math]P_i[/math] is the [math]i[/math]-th row vector of matrix [math]P[/math], satisfying the normalization condition for conditional probabilities: [math]||P_i||_1=1[/math], where [math]||\cdot||_1[/math] denotes the 1-norm of a vector. Then, EI can be written as follows:
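<math>
EI = \frac{1}{N}\sum_{i=1}^{N} D_{KL}\left(P_i \parallel \bar{P}\right) = \frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{N} P_{ij}\log\frac{N\,P_{ij}}{\sum_{k=1}^{N} P_{kj}}
</math>

where [math]\bar{P}=\frac{1}{N}\sum_{k=1}^{N}P_k[/math] is the average row vector. (The equation itself is cut from this excerpt; what is shown here is the standard form of EI as the mean KL divergence between each row of [math]P[/math] and the average row, assuming the uniform intervention distribution used in the definition of EI.)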
In this example, the microstate transition matrix is a [math]4\times 4[/math] matrix, where the first three states transition to each other with a probability of 1/3. This leads to a transition matrix with relatively low determinism, and thus the EI is not very high, with a value of 0.81. However, when we coarse-grain this matrix by merging the first three states into one macrostate a, while the last state becomes another macrostate b, all transitions among the original three microstates become internal transitions within macrostate a. The transition probability matrix then becomes [math]P_M[/math], with an EI of 1. In this case, the [[Causal Emergence]] can be measured as:
    
<math>
CE = EI(P_M) - EI(P_m) = 1 - 0.81 = 0.19
</math>
That is, there is 0.19 bits of causal emergence.
Sometimes, [[Causal Emergence]] is calculated using the normalized EI, defined as:
    
<math>
eff = \frac{EI}{\log_2 N}
</math>
Since normalized EI eliminates the effect of system size, the measure of causal emergence becomes larger.

<!--[[文件:Example1.png|815x815px|无框|居中]]

The above figure shows the transition probability matrices of several Markov chains: (a) has high determinism and low degeneracy, resulting in a relatively high overall eff; (b) has both high determinism and high degeneracy, so its eff is 0; (c) has lower determinism than (a), and (d) has both higher determinism and higher degeneracy, so both (c) and (d) have lower eff. Applying the same coarse-graining strategy (merging the first four states into one) to either (c) or (d) yields (e). Since (e) has high determinism and no degeneracy, the eff of (e) is higher than that of (c) and (d).-->
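As a quick worked check using the example above: [math]eff(P_m) = 0.81/\log_2 4 \approx 0.41[/math] for the microscopic matrix, while [math]eff(P_M) = 1/\log_2 2 = 1[/math] for the macroscopic one, so the normalized measure of causal emergence is about [math]1 - 0.41 = 0.59[/math], which is indeed larger than the unnormalized value of 0.19 bits.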
==Python Source Code for Calculating EI==
Below is the Python source code for calculating EI for a Markov transition matrix. The input <code>tpm</code> is a Markov transition matrix that satisfies the row normalization condition. The returned values are <code>ei_all</code>, which is the EI, and other parameters such as effectiveness (<code>eff</code>), determinism (<code>det</code>), degeneracy (<code>deg</code>), '''Determinism Coefficient''' (<code>det_c</code>), and '''Degeneracy Coefficient''' (<code>deg_c</code>).
 
<syntaxhighlight lang="python3">
import numpy as np

def tpm_ei(tpm, log_base = 2):
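    # The function body is omitted in this excerpt; what follows is a minimal
    # reconstruction based on the definitions above: EI is the average KL
    # divergence between each row of the matrix and the mean row, and can be
    # decomposed as determinism minus degeneracy.
    tpm = np.asarray(tpm, dtype=float)
    n = tpm.shape[0]

    def log(x):
        return np.log(x) / np.log(log_base)

    def entropy(p):
        p = p[p > 0]  # convention: 0 * log(0) = 0
        return -np.sum(p * log(p))

    # Average transition distribution under the uniform intervention do(X ~ U)
    p_bar = tpm.mean(axis=0)
    avg_row_entropy = np.mean([entropy(row) for row in tpm])

    det = log(n) - avg_row_entropy  # determinism
    deg = log(n) - entropy(p_bar)   # degeneracy
    ei_all = det - deg              # EI = H(p_bar) - <H(P_i)> = det - deg
    eff = ei_all / log(n)           # effectiveness (normalized EI)
    det_c = det / log(n)            # determinism coefficient
    deg_c = deg / log(n)            # degeneracy coefficient
    return ei_all, eff, det, deg, det_c, deg_c


# The example matrix from the table above, reconstructed here since its
# original definition lies outside this excerpt: the first three states
# transition to each other with probability 1/3, the fourth is absorbing.
# Running tpm_ei on it gives an EI of about 0.81 bits.
mi_states = np.array([
    [1/3, 1/3, 1/3, 0],
    [1/3, 1/3, 1/3, 0],
    [1/3, 1/3, 1/3, 0],
    [0,   0,   0,   1],
])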
tpm_ei(mi_states)
</syntaxhighlight>
=EI for Continuous Variables=
In reality, most systems need to be considered in continuous spaces, so it is necessary to extend the concept of EI to continuous systems.
The core idea of this extension is to simplify the causal mechanism in continuous space into a deterministic function mapping [math]f(X)[/math] combined with a noise variable [math]\xi[/math]. In the cases listed below, [math]\xi\sim \mathcal{N}(0,\Sigma)[/math], meaning it follows a Gaussian distribution, which allows us to obtain an analytical expression for EI. For more general cases, no results are available in the literature yet.
    
==Random Function Mapping==
Initially, Erik Hoel considered this and proposed the framework of [[Causal Geometry]]<ref name=Chvykov_causal_geometry />. This framework not only discusses the calculation of EI for random function mappings but also introduces the concept of intervention noise, defines the local form of EI, and draws analogies and comparisons with [[Information Geometry]]. Below, we discuss one-dimensional and multi-dimensional function mappings and the local form of EI.
 
===One-Dimensional Function Mapping===
 
First, let's consider the simplest case:
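Consider a one-dimensional mapping [math]y=f(x)+\varepsilon[/math] with Gaussian noise [math]\varepsilon\sim\mathcal{N}(0,\sigma^2)[/math], under a uniform intervention [math]do(x)\sim U([-L/2,L/2])[/math]. (The continuation of this section lies outside the excerpt; what follows is a sketch of the small-noise result from the causal geometry literature, with [math]L[/math] the assumed intervention range and [math]\sigma[/math] the noise scale.) In this regime, EI is approximately

<math>
EI \approx \log\frac{L}{\sigma\sqrt{2\pi e}} + \frac{1}{L}\int_{-L/2}^{L/2}\log\left|\frac{df(x)}{dx}\right|\,dx,
</math>

that is, the entropy of the intervention range relative to that of the Gaussian noise, plus the average log-slope of the mapping [math]f[/math].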