==Properties of the EI Function==
From Equation 2, we can see that EI is a function of the transition probability matrix P, that is, of each of its elements, which are the conditional probabilities of transitioning from one state to another. A natural question therefore arises: what mathematical properties does this function have? Does it have extreme points, and if so, where are they? Is it convex? What are its maximum and minimum values?
===Domain===
In the case of Markov chains with discrete states and discrete time, the domain of EI is clearly the transition probability matrix P. P is a matrix composed of N×N elements, each of which is a probability value [math]p_{ij}\in[0,1][/math]. Additionally, each row must satisfy the normalization condition, i.e., for every [math]i\in[1,N][/math] the probabilities in row i sum to one:{{NumBlk|:|
<math>
\sum_{j=1}^N p_{ij}=1
</math>
}}
The maximum value of EI is:

<math>
EI_{max}=\log N
</math>
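
To make these properties concrete, here is a short Python sketch (an illustration added here, not code from the original article), assuming that Equation 2 is the usual uniform-intervention definition of EI, i.e., the average KL divergence, in bits, of each row of P from the mean row. It checks the row-normalization condition of the domain and verifies that a permutation matrix attains the maximum value EI = log<sub>2</sub>N.

<syntaxhighlight lang="python">
# A minimal sketch (not from the original article), assuming the uniform-intervention
# definition of EI: the average KL divergence, in bits, of each row of P from the mean row.
import numpy as np

def effective_information(P):
    """EI of an N x N row-stochastic transition matrix P."""
    P = np.asarray(P, dtype=float)
    assert np.allclose(P.sum(axis=1), 1.0), "each row must sum to 1 (normalization condition)"
    N = P.shape[0]
    P_bar = P.mean(axis=0)  # distribution over next states under a uniform intervention
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(P > 0, P / P_bar, 1.0)      # P_bar_j > 0 wherever p_ij > 0
        terms = np.where(P > 0, P * np.log2(ratio), 0.0)
    return float(terms.sum() / N)

# A permutation matrix (deterministic, invertible dynamics) attains the maximum EI = log2(N):
P_perm = np.eye(4)[[1, 2, 3, 0]]                     # 4-state cyclic shift
print(effective_information(P_perm))                 # 2.0 == log2(4)
</syntaxhighlight>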
==Analytical Solution for the Simplest Markov Chain==
We consider the simplest 2×2 Markov chain matrix:

<math>
P=\begin{pmatrix} p & 1-p \\ 1-q & q \end{pmatrix}
</math>

Here, [math]p[/math] and [math]q[/math] are parameters that take values in the range [math][0,1][/math].

The EI (Effective Information) of this transition probability matrix, which depends on [math]p[/math] and [math]q[/math], can be calculated using the following analytical solution:

<math>
EI=\frac{1}{2}\left[p\log_2 p+(1-p)\log_2(1-p)+q\log_2 q+(1-q)\log_2(1-q)\right]-\frac{p+1-q}{2}\log_2\frac{p+1-q}{2}-\frac{1-p+q}{2}\log_2\frac{1-p+q}{2}
</math>

The figure below shows how EI changes with different values of [math]p[/math] and [math]q[/math].

[[文件:EIpq.png|替代=|400x400像素|链接=https://wiki.swarma.org/index.php/%E6%96%87%E4%BB%B6:EIpq.png]]

It is clear from this figure that EI attains its minimum value of 0 when [math]p+q=1[/math], i.e., when the two row vectors are identical. Otherwise, as [math]p[/math] and [math]q[/math] move away from the line [math]p+q=1[/math] along the direction perpendicular to it, EI increases, reaching its maximum value of 1.
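
These observations can be checked numerically. The sketch below (an illustration, not code from the article) evaluates EI for the parameterized matrix above under the same uniform-intervention definition as before, confirming that EI vanishes on the line p+q=1 and equals 1 at the deterministic corners p=q=1 and p=q=0.

<syntaxhighlight lang="python">
# Numerical check of the minimum and maximum of EI for the 2x2 matrix
# [[p, 1-p], [1-q, q]] (a sketch assuming the uniform-intervention EI with base-2 logs).
import numpy as np

def ei_2x2(p, q):
    P = np.array([[p, 1.0 - p],
                  [1.0 - q, q]])
    P_bar = P.mean(axis=0)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(P > 0, P * np.log2(np.where(P > 0, P / P_bar, 1.0)), 0.0)
    return float(terms.sum() / 2)

print(ei_2x2(0.3, 0.7))  # p + q = 1: identical rows, EI = 0
print(ei_2x2(1.0, 1.0))  # identity matrix: EI = 1 (maximum)
print(ei_2x2(0.0, 0.0))  # swap (permutation) matrix: EI = 1
</syntaxhighlight>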
==Causal Emergence==
With the metric of Effective Information (EI) in place, we can now discuss causal emergence in Markov chains. For a Markov chain, an observer can adopt a multi-scale perspective to distinguish between micro and macro levels. First, the original Markov transition matrix P defines the micro-level dynamics. Second, after a coarse-graining process that maps microstates into macrostates (typically by grouping microstates together), the observer can obtain a macro-level transition matrix P′, which describes the transition probabilities between macrostates. We can compute EI for both dynamics. If the macro-level EI is greater than the micro-level EI, we say that the system exhibits causal emergence.

[[文件:CE.png|替代=Schematic of causal emergence|500x500像素|链接=https://wiki.swarma.org/index.php/%E6%96%87%E4%BB%B6:CE.png]]

A new metric can be defined to directly measure the degree of causal emergence:

<math>
CE = EI(P') - EI(P)
</math>

Here, [math]P[/math] is the microstate Markov transition matrix, with dimensions [math]N\times N[/math], where N is the number of microstates; [math]P'[/math] is the macrostate Markov transition matrix obtained by coarse-graining [math]P[/math], with dimensions [math]M\times M[/math], where [math]M<N[/math] is the number of macrostates.

Coarse-graining a Markov transition matrix typically involves two steps: (1) merging the microstates, i.e., grouping the N microstates into M macrostates; and (2) reducing the Markov transition matrix accordingly. For specific methods of coarse-graining Markov chains, see [[马尔科夫链的粗粒化]] (coarse-graining of Markov chains).
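
To illustrate the two steps and the CE metric on a toy example, the following Python sketch (constructed for illustration; it is not taken from the article, and it uses a simple averaging-based reduction of the transition matrix, whereas the dedicated page above describes the actual coarse-graining methods) groups 4 microstates into 2 macrostates and computes CE = EI(P') - EI(P).

<syntaxhighlight lang="python">
# A toy example of causal emergence: 4 microstates are grouped into 2 macrostates and
# CE = EI(P') - EI(P) is computed. Assumes uniform-intervention EI (base-2 logs) and a
# simple averaging-based reduction of the transition matrix (an illustrative choice).
import numpy as np

def effective_information(P):
    P = np.asarray(P, dtype=float)
    N = P.shape[0]
    P_bar = P.mean(axis=0)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(P > 0, P * np.log2(np.where(P > 0, P / P_bar, 1.0)), 0.0)
    return float(terms.sum() / N)

def coarse_grain(P, groups):
    """Reduce P to an M x M macro matrix; groups is a list of lists of microstate indices."""
    P = np.asarray(P, dtype=float)
    M = len(groups)
    P_macro = np.zeros((M, M))
    for I, gi in enumerate(groups):
        for J, gj in enumerate(groups):
            # probability of landing in group J, averaged over the microstates of group I
            P_macro[I, J] = P[np.ix_(gi, gj)].sum(axis=1).mean()
    return P_macro

# Micro dynamics: states {0,1,2} mix uniformly among themselves; state 3 is absorbing.
P = np.array([[1/3, 1/3, 1/3, 0.0],
              [1/3, 1/3, 1/3, 0.0],
              [1/3, 1/3, 1/3, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
P_macro = coarse_grain(P, groups=[[0, 1, 2], [3]])   # macro dynamics: [[1, 0], [0, 1]]

ei_micro, ei_macro = effective_information(P), effective_information(P_macro)
print(ei_micro, ei_macro, ei_macro - ei_micro)        # EI rises from ~0.81 to 1.0: CE > 0
</syntaxhighlight>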