===Determinism and Degeneracy===
In the above definition, the determinism and non-degeneracy terms are negative. To prevent this, we redefine the '''Determinism''' of a Markov chain transition matrix P as:

<math>
Determinism \equiv \log N - \frac{1}{N}\sum_{i=1}^N H(P_i) = \log N + \frac{1}{N}\sum_{i=1}^N\sum_{j=1}^N p_{ij}\log p_{ij} \leq \log N
</math>

When <math>P_i</math> is a one-hot vector with no uncertainty, the equality in this equation holds. Therefore, when all <math>P_i</math> are one-hot vectors, we have:

<math>
EI = H(\bar{P}) \leq \log N
</math>

When the equalities in these two equations hold simultaneously, that is, when every <math>P_i</math> is a one-hot vector and <math>\bar{P}</math> is uniformly distributed (which requires the vectors <math>P_i</math> to be mutually orthogonal, i.e., P is a [[Permutation Matrix]]), EI reaches its maximum value:

<math>
EI_{max}=\log N
</math>
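These quantities are straightforward to evaluate numerically. The following is a minimal sketch (the function and variable names are ours, not from the text) that computes Determinism, Degeneracy, and EI for a transition matrix under the uniform intervention, in bits:

```python
import numpy as np

def ei_terms(P, base=2.0):
    """Determinism, degeneracy, and EI of an N x N transition matrix P,
    measured in bits for base=2, under a uniform intervention."""
    P = np.asarray(P, dtype=float)
    N = P.shape[0]
    log = lambda x: np.log(x) / np.log(base)

    def entropy(v):
        # Shannon entropy with the convention 0 * log 0 = 0.
        v = v[v > 0]
        return float(-np.sum(v * log(v)))

    avg_row_entropy = np.mean([entropy(row) for row in P])  # <H(P_i)>
    H_bar = entropy(P.mean(axis=0))                         # H(P_bar)
    determinism = log(N) - avg_row_entropy  # log N iff every row is one-hot
    degeneracy = log(N) - H_bar             # 0 iff P_bar is uniform
    return determinism, degeneracy, determinism - degeneracy

# A permutation matrix (here the 4 x 4 identity) attains EI_max = log N = 2 bits.
det, deg, ei = ei_terms(np.eye(4))
print(det, deg, ei)  # 2.0 0.0 2.0
```

Any permutation of the rows of the identity gives the same result, since each row stays one-hot and the average row stays uniform.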

==Analytical Solution of the Simplest Markov Chain==

We consider the simplest 2x2 Markov chain transition matrix:

<math>
P=\begin{pmatrix}
p & 1-p \\
1-q & q
\end{pmatrix}
</math>

Here, <math>p</math> and <math>q</math> are parameters that take values in the range <math>[0,1]</math>.

The EI (Effective Information) of this transition probability matrix, which depends on <math>p</math> and <math>q</math>, can be calculated using the following analytical solution:

<math>
EI=-\frac{p+1-q}{2}\log_2\frac{p+1-q}{2}-\frac{1-p+q}{2}\log_2\frac{1-p+q}{2}+\frac{1}{2}\left[p\log_2 p+(1-p)\log_2(1-p)+q\log_2 q+(1-q)\log_2(1-q)\right]
</math>

It is clear from the graph that when <math>p+q=1</math>, meaning that all the row vectors are identical, EI reaches its minimum value of 0. Otherwise, as <math>p</math> and <math>q</math> move away from the line <math>p+q=1</math> along the perpendicular direction, EI increases, with the maximum value being 1.
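As a numerical cross-check, assuming the transition matrix has rows <math>(p, 1-p)</math> and <math>(1-q, q)</math> and taking log base 2, the analytical solution reduces to EI = H((p+1-q)/2) - (H(p)+H(q))/2, where H is the binary entropy. The sketch below (our own code, not from the text) compares this against the direct definition EI = H(P̄) - ⟨H(P_i)⟩:

```python
import numpy as np

def h2(x):
    """Binary entropy H(x) in bits, with the convention 0 log 0 = 0."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return float(-x * np.log2(x) - (1 - x) * np.log2(1 - x))

def ei_2x2(p, q):
    # Closed form: EI = H((p + 1 - q) / 2) - (H(p) + H(q)) / 2
    return h2((p + 1 - q) / 2) - (h2(p) + h2(q)) / 2

def ei_direct(P):
    # EI = H(P_bar) - <H(P_i)>, with P_bar the average row.
    ent = lambda v: -sum(x * np.log2(x) for x in v if x > 0)
    return ent(np.mean(P, axis=0)) - np.mean([ent(row) for row in P])

p, q = 0.9, 0.3
P = [[p, 1 - p], [1 - q, q]]
assert abs(ei_2x2(p, q) - ei_direct(P)) < 1e-9
assert abs(ei_2x2(0.4, 0.6)) < 1e-9          # rows identical on the line p + q = 1
assert abs(ei_2x2(1.0, 1.0) - 1.0) < 1e-9    # identity matrix: EI = 1 bit
```

The two extreme cases match the discussion above: EI vanishes on the line p + q = 1 and reaches 1 bit when P is a 2x2 permutation matrix.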

==Causal Emergence==

With the metric of Effective Information (EI) in place, we can now discuss causal emergence in Markov chains. For a Markov chain, an observer can adopt a multi-scale perspective to distinguish between micro and macro levels. First, the original Markov transition matrix P defines the micro-level dynamics. Second, after a coarse-graining process that maps microstates into macrostates (typically by grouping microstates together), the observer can obtain a macro-level transition matrix P′, which describes the transition probabilities between macrostates. We can compute EI for both dynamics. If the macro-level EI is greater than the micro-level EI, we say that the system exhibits causal emergence.
The degree of causal emergence is measured as:

<math>
CE = EI(P') - EI(P)
</math>

Here, <math>P</math> is the microstate Markov transition matrix with dimensions <math>N\times N</math>, where <math>N</math> is the number of microstates. <math>P'</math> is the macrostate transition matrix obtained after the coarse-graining of <math>P</math>, with dimensions <math>M\times M</math>, where <math>M<N</math> represents the number of macrostates.

The process of coarse-graining a Markov transition matrix typically involves two steps: 1) grouping the N microstates into M macrostates, and 2) reducing the Markov transition matrix accordingly. For more details on the specific methods for coarse-graining a Markov chain, refer to the topic of [[Markov Chain Coarse-graining]].
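As an illustration of step 2, here is a minimal sketch (our own hypothetical helper, not a method prescribed by the text) that reduces a transition matrix given a grouping of microstates, averaging uniformly over the microstates within each source macrostate:

```python
import numpy as np

def coarse_grain(P, groups):
    """Reduce an N x N transition matrix to M x M, given a partition of the
    microstates into macrostates, e.g. groups = [[0, 1, 2], [3]].
    Each macro transition probability is averaged uniformly over the
    microstates that make up the source macrostate."""
    P = np.asarray(P, dtype=float)
    M = len(groups)
    P_macro = np.zeros((M, M))
    for a, A in enumerate(groups):
        for b, B in enumerate(groups):
            # Mean, over microstates i in A, of the total probability of
            # jumping from i into any microstate of B.
            P_macro[a, b] = P[np.ix_(A, B)].sum(axis=1).mean()
    return P_macro

P_m = np.array([[1/3, 1/3, 1/3, 0],
                [1/3, 1/3, 1/3, 0],
                [1/3, 1/3, 1/3, 0],
                [0,   0,   0,   1]])
P_M = coarse_grain(P_m, [[0, 1, 2], [3]])  # the 2 x 2 identity matrix
```

Uniform averaging is only one possible weighting; weighting rows by a stationary distribution is another common choice.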

If the computed CE>0, the system is said to exhibit [[Causal Emergence]]; otherwise, it does not.

Below, we demonstrate a specific example of causal emergence:

{| style="text-align: center;"
|+Markov Chain Example
|-
|
<math>
P_m=\begin{pmatrix}
1/3 & 1/3 & 1/3 & 0 \\
1/3 & 1/3 & 1/3 & 0 \\
1/3 & 1/3 & 1/3 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
</math>
|
<math>
P_M=\begin{pmatrix}
1 & 0 \\
0 & 1
\end{pmatrix}
</math>
|-
|<math>\begin{aligned}&Det(P_m)=0.81\ bits,\\&Deg(P_m)=0\ bits,\\&EI(P_m)=0.81\ bits\end{aligned}</math>||<math>\begin{aligned}&Det(P_M)=1\ bits,\\&Deg(P_M)=0\ bits,\\&EI(P_M)=1\ bits\end{aligned}</math>
|}

In this example, the microstate transition matrix <math>P_m</math> is a 4x4 matrix, where the first three states transition to one another with probability 1/3. This gives the matrix relatively low determinism, so its EI is not very high, at 0.81 bits. However, when we coarse-grain this matrix by merging the first three states into one macrostate a and letting the last state become another macrostate b, all transitions among the original three microstates become internal transitions within macrostate a. The transition probability matrix thus becomes <math>P_M</math>, with an EI of 1 bit. In this case, the causal emergence can be measured as: