If the distribution is not uniform, some rows will be assigned a larger weight, while others will be given a smaller weight. This weight represents a certain "bias," which prevents the EI from reflecting the natural properties of the causal mechanism.

=Effective Information of Markov Chains=
==Introduction to Markov Chains==
<blockquote>In this section, all Markov transition probability matrices are denoted as [math]P[/math], and [math]N[/math] is the total number of states.</blockquote>

Erik Hoel and colleagues first proposed Effective Information (EI), a measure of causality, on Markov dynamics with discrete states, that is, on Markov chains. Therefore, this section introduces the specific form of EI on Markov chains.

A Markov chain is a stationary stochastic process with discrete states and discrete time. Its dynamics can generally be represented by a Transitional Probability Matrix (TPM), also known as a probability transition matrix or state transition matrix.

Specifically, a Markov chain consists of a set of random variables [math]X_t[/math] that take values in the state space [math]\mathcal{X}=\{1,2,\cdots,N\}[/math], where [math]t[/math] typically represents time. The transition probability matrix is a probability matrix whose element in the [math]i[/math]-th row and [math]j[/math]-th column, [math]p_{ij}[/math], gives the probability that the system, conditioned on being in state [math]i[/math] at any time [math]t[/math], transitions to state [math]j[/math] at time [math]t+1[/math]. Each row satisfies the normalization condition:

<math>
\sum_{j=1}^{N} p_{ij}=1
</math>
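
To make these definitions concrete, here is a minimal sketch that builds a small TPM as a NumPy array and checks the row-normalization condition. The matrix values are arbitrary illustrations, not taken from this article.

<syntaxhighlight lang="python">
import numpy as np

# A hypothetical 4-state transition probability matrix (TPM):
# P[i, j] is the probability of jumping from state i to state j.
P = np.array([
    [0.7, 0.1, 0.1, 0.1],
    [0.1, 0.7, 0.1, 0.1],
    [0.1, 0.1, 0.7, 0.1],
    [0.1, 0.1, 0.1, 0.7],
])

# Every row must sum to 1 (the normalization condition).
assert np.allclose(P.sum(axis=1), 1.0)
print(P.sum(axis=1))  # [1. 1. 1. 1.]
</syntaxhighlight>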
The state transition matrix can be viewed as the dynamics of the Markov chain because the probability distribution of the state at any time [math]t+1[/math], i.e., [math]Pr(X_{t+1})[/math], is uniquely determined by the probability distribution of the state at the previous time, [math]Pr(X_t)[/math], together with the state transition matrix, satisfying the relationship:

<math>
Pr(X_{t+1}=j)=\sum_{i=1}^{N} p_{ij}\cdot Pr(X_t=i)
</math>

Here, [math]i,j\in \mathcal{X}[/math] are arbitrary states in [math]\mathcal{X}[/math], and [math]N=\#(\mathcal{X})[/math] is the total number of states in [math]\mathcal{X}[/math].
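
In code, this update is just a vector-matrix product. The sketch below evolves a distribution over states by one time step, reusing the hypothetical TPM from the previous snippet.

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical 4-state TPM from the previous snippet (rows sum to 1).
P = np.array([
    [0.7, 0.1, 0.1, 0.1],
    [0.1, 0.7, 0.1, 0.1],
    [0.1, 0.1, 0.7, 0.1],
    [0.1, 0.1, 0.1, 0.7],
])

# Distribution over the four states at time t: Pr(X_t = i).
p_t = np.array([1.0, 0.0, 0.0, 0.0])

# One-step update: Pr(X_{t+1} = j) = sum_i p_{ij} * Pr(X_t = i).
p_next = p_t @ P
print(p_next)        # [0.7 0.1 0.1 0.1]
print(p_next.sum())  # 1.0 -- still a valid probability distribution
</syntaxhighlight>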
The following table presents three different transition probability matrices and their EI values:

{|
|+Examples of Markov Chains
|-
|}
All three of these Markov chains have the state space [math]\mathcal{X}=\{1,2,3,4\}[/math], so the size of each TPM is [math]4\times 4[/math].

==EI of Markov Chains==
In a Markov chain, the state variable [math]X_t[/math] at any moment can be regarded as the cause, and the state variable [math]X_{t+1}[/math] at the next moment can be regarded as the effect, so the state transition matrix of the Markov chain is its causal mechanism. Therefore, we can apply the definition of effective information to Markov chains.
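
Because the TPM is the causal mechanism, EI can be computed directly from it. Below is a minimal sketch assuming the uniform-intervention reading of EI discussed above: intervene to make [math]X_t[/math] uniformly distributed and measure the mutual information between [math]X_t[/math] and [math]X_{t+1}[/math]. The function name <code>compute_ei</code> and the example matrices are illustrative assumptions, not values taken from the table above.

<syntaxhighlight lang="python">
import numpy as np

def compute_ei(P, eps=1e-12):
    """EI of a TPM under a uniform intervention on X_t (a sketch).

    Equivalent to the average KL divergence (in bits) between each
    row of P and the mean of all rows, i.e. I(X_t; X_{t+1}) when
    Pr(X_t) is set to the uniform distribution.
    """
    P = np.asarray(P, dtype=float)
    avg_row = P.mean(axis=0)  # distribution of X_{t+1} under a uniform X_t
    kl_rows = np.where(P > 0, P * np.log2((P + eps) / (avg_row + eps)), 0.0).sum(axis=1)
    return kl_rows.mean()

# Deterministic, one-to-one dynamics: a cyclic permutation of 4 states.
P_det = np.eye(4)[[1, 2, 3, 0]]
print(compute_ei(P_det))   # 2.0 bits = log2(4)

# Completely random dynamics: all rows identical, the past tells us nothing.
P_rand = np.full((4, 4), 0.25)
print(compute_ei(P_rand))  # 0.0 bits
</syntaxhighlight>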