===Determinism and Degeneracy===
In the above definition, the determinism and non-degeneracy terms are both negative. To avoid this, we redefine the '''Determinism''' of a Markov chain transition matrix <math>P</math> as:
<math>
Determinism = -\langle H(P_i)\rangle + \log N
</math>
This term is an average [[Negative Entropy]], where the addition of <math>\log N</math><ref name=hoel_2013 /> prevents it from being negative. Determinism quantifies the certainty in predicting the system's next state given its current state. The reason lies in the fact that the closer a vector is to a uniform distribution, the larger its entropy, and the closer it is to a "one-hot" vector (one element equal to 1 and the others 0), the smaller its entropy. The row vectors of the Markov transition matrix give the probabilities of transitioning from the current state to each possible future state. When the average negative entropy of the row vectors is high, each row vector concentrates its probability on a single element, meaning the system will transition to a specific next state with certainty.
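To make this concrete, the determinism of a transition matrix can be computed directly from the entropies of its row vectors. The sketch below uses NumPy and assumes base-2 logarithms (so the maximum value is log<sub>2</sub> N); the function names <code>entropy</code> and <code>determinism</code> are illustrative, not from any particular library.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in bits) of a probability vector, skipping zero entries."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def determinism(P):
    """Determinism = log N - average entropy of the row vectors of P."""
    N = P.shape[0]
    return np.log2(N) - np.mean([entropy(row) for row in P])

# Every row is one-hot: each row entropy is 0, so determinism is maximal (log2 4 = 2).
P_det = np.array([[0., 1., 0., 0.],
                  [0., 0., 1., 0.],
                  [0., 0., 0., 1.],
                  [1., 0., 0., 0.]])

# Every row is uniform: each row entropy is log2 4, so determinism is 0.
P_rand = np.full((4, 4), 0.25)

print(determinism(P_det))   # 2.0
print(determinism(P_rand))  # 0.0
```

The two matrices realize the extremes described above: one-hot rows give a fully predictable next state, uniform rows give no predictability at all.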
We also define the '''Degeneracy''' of a Markov chain transition matrix <math>P</math> as:
    
<math>
Degeneracy = -H(\bar{P}) + \log N
</math>
This term measures degeneracy or, conversely, non-degeneracy; the <math>\log N</math> term<ref name=hoel_2013 /> is likewise added to prevent it from being negative. Degeneracy answers the question: given the current state of the system, can the state at the previous moment be deduced? If it can, the degeneracy of the Markov matrix is low, i.e., the matrix is non-degenerate; if it is difficult to deduce, the matrix is degenerate. Degeneracy can be described by the negative entropy of the average row vector: when the row vectors of <math>P</math> are mutually independent "one-hot" vectors, their average is very close to the uniform distribution, i.e., <math>\bar{P}\approx (\frac{1}{N},\frac{1}{N},\cdots,\frac{1}{N})</math>, which has maximum [[Shannon Entropy]], <math>\log N</math>. In this case, the Markov transition matrix is an '''Invertible Matrix''', indicating that we can deduce the previous state from the current state. Therefore, this Markov matrix is non-degenerate, and the computed degeneracy is zero.
Conversely, when all row vectors of <math>P</math> are identical one-hot vectors, the average vector is itself a one-hot vector with minimum [[Entropy]]. In this case, it is impossible to tell which previous state the system came from, so the Markov matrix is degenerate (non-invertible), and the computed degeneracy equals <math>\log N</math>.
In more general situations, the closer the row vectors of <math>P</math> are to a set of mutually independent "one-hot" vectors, the less degenerate <math>P</math> is; conversely, the closer the row vectors are to identical copies of a single "one-hot" vector, the more degenerate <math>P</math> is.
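Degeneracy admits the same style of computation: take the entropy of the average row vector and subtract it from log N. The sketch below (NumPy, base-2 logarithms; the helper names are illustrative) reproduces the two extreme cases discussed above.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in bits) of a probability vector, skipping zero entries."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def degeneracy(P):
    """Degeneracy = log N - entropy of the average row vector of P."""
    N = P.shape[0]
    return np.log2(N) - entropy(P.mean(axis=0))

# Independent one-hot rows (a permutation matrix): the average row is uniform,
# its entropy is log2 N, and the degeneracy is 0 (non-degenerate, invertible).
P_perm = np.eye(4)

# All rows are the same one-hot vector: the average row is one-hot with entropy 0,
# so the degeneracy reaches its maximum, log2 4 = 2.
P_deg = np.zeros((4, 4))
P_deg[:, 0] = 1.0

print(degeneracy(P_perm))  # 0.0
print(degeneracy(P_deg))   # 2.0
```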
    
===Example===