==Related research fields==
Several research fields are closely related to causal emergence theory. Here we focus on the differences and connections with three of them: [[reduction of dynamical models]], [[dynamic mode decomposition]], and [[simplification of Markov chains]].

In general, the error loss function of the output function before and after model reduction can be used to judge the coarse-graining parameters. This approach implicitly assumes that the reduction process loses information, so minimizing the error becomes the only criterion for judging the effectiveness of a reduction method. From the perspective of causal emergence, however, [[effective information]] can increase under dimensionality reduction. This is the biggest difference between the coarse-graining strategy in causal emergence research and model reduction in control theory. When the dynamical system is a stochastic system <ref name=":17" />, directly calculating the loss function becomes unstable because of the randomness, so the effectiveness of the reduction cannot be measured accurately. Effective information and the causal emergence index defined for stochastic dynamical systems can, to a certain extent, improve the effectiveness of the evaluation indicators and make the control research of stochastic dynamical systems more rigorous.
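
This contrast can be made concrete with a small numerical sketch. Assuming the standard definition of effective information for a discrete transition probability matrix (the mutual information between consecutive states under a uniform intervention on the current state), the example below shows a chain whose effective information increases after its states are merged; the function name and the example matrices are purely illustrative.

<syntaxhighlight lang="python">
import numpy as np

def effective_information(tpm):
    """EI of a transition probability matrix (rows sum to 1), in bits:
    the average KL divergence between each row and the mean row, i.e.
    I(X_t; X_{t+1}) under a uniform intervention on X_t."""
    tpm = np.asarray(tpm, dtype=float)
    mean_row = tpm.mean(axis=0)  # effect distribution under the uniform intervention
    ratio = np.divide(tpm, mean_row, out=np.ones_like(tpm), where=tpm > 0)
    return float((tpm * np.log2(ratio)).sum(axis=1).mean())

# Four micro states: 0, 1, 2 transition uniformly among themselves; 3 is absorbing.
micro = np.array([[1/3, 1/3, 1/3, 0],
                  [1/3, 1/3, 1/3, 0],
                  [1/3, 1/3, 1/3, 0],
                  [0,   0,   0,   1]])

# Two macro states obtained by merging {0, 1, 2} into a single state.
macro = np.array([[1.0, 0.0],
                  [0.0, 1.0]])

print(effective_information(micro))  # ~0.81 bits
print(effective_information(macro))  # 1.0 bit: EI increases after coarse-graining
</syntaxhighlight>
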
===Simplification of Markov chains===
The [[simplification of Markov chains]] (also called [[coarse-graining of Markov chains]]) is also closely related to causal emergence: the coarse-graining process in causal emergence is essentially a simplification of a Markov chain. Model simplification of Markov processes <ref>Zhang A, Wang M. Spectral state compression of Markov processes[J]. IEEE Transactions on Information Theory, 2019, 66(5): 3202-3231.</ref> is an important problem in state transition system modeling. It reduces the complexity of a Markov chain by merging multiple states into one state.
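
A hard merge of states can be written as a short routine. The sketch below is illustrative only: it assumes uniform weighting over the merged micro states (a stationary distribution could be used instead) and builds the coarse-grained transition matrix by averaging the rows within each group and summing the columns within each group.

<syntaxhighlight lang="python">
import numpy as np

def lump_tpm(tpm, partition, weights=None):
    """Merge the micro states listed in each group of `partition` into one macro
    state: rows within a group are averaged with `weights` (uniform by default),
    and columns within a group are summed."""
    tpm = np.asarray(tpm, dtype=float)
    if weights is None:
        weights = np.ones(tpm.shape[0])
    macro = np.zeros((len(partition), len(partition)))
    for a, group_a in enumerate(partition):
        w = np.asarray([weights[i] for i in group_a], dtype=float)
        w = w / w.sum()
        for b, group_b in enumerate(partition):
            macro[a, b] = (w[:, None] * tpm[np.ix_(group_a, group_b)]).sum()
    return macro

micro = np.array([[1/3, 1/3, 1/3, 0],
                  [1/3, 1/3, 1/3, 0],
                  [1/3, 1/3, 1/3, 0],
                  [0,   0,   0,   1]])
print(lump_tpm(micro, [[0, 1, 2], [3]]))  # [[1. 0.] [0. 1.]]
</syntaxhighlight>
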
There are two types of coarse-graining of the state space: hard partitioning and soft partitioning. Soft partitioning can be regarded as breaking up the microscopic states and reconstructing macroscopic states from them, allowing a macroscopic state to be a superposition of microscopic states. Hard partitioning is a strict grouping of the microscopic states, assigning several microscopic states to one group without allowing overlap or superposition (see [[coarse-graining of Markov chains]]).
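
The two kinds of partition can be illustrated as projection matrices acting on a distribution over micro states; the matrices below are hypothetical examples.

<syntaxhighlight lang="python">
import numpy as np

# Hard partition of 4 micro states into 2 macro states:
# each row is one-hot, so every micro state belongs to exactly one group.
A_hard = np.array([[1, 0],
                   [1, 0],
                   [1, 0],
                   [0, 1]], dtype=float)

# Soft partition: micro state 2 is split between the two macro states,
# so its row is a probability distribution rather than a one-hot vector.
A_soft = np.array([[1.0, 0.0],
                   [1.0, 0.0],
                   [0.5, 0.5],
                   [0.0, 1.0]])

v_micro = np.array([0.25, 0.25, 0.25, 0.25])  # a distribution over micro states
print(v_micro @ A_hard)  # [0.75 0.25]
print(v_micro @ A_soft)  # [0.625 0.375]
</syntaxhighlight>
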
In addition to these basic guarantees, we usually also require that the coarse-graining operation commute with the transition matrix. This condition ensures that evolving the coarse-grained state vector one step with the coarse-grained transition matrix (the macroscopic dynamics) is equivalent to first evolving the state vector one step with the original transition matrix (the microscopic dynamics) and then coarse-graining. This imposes requirements both on the state grouping (the coarse-graining of states) and on the coarse-graining of the transition matrix. This commutativity requirement leads to the notion of the [[clustering property of Markov chains]].
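
Under the assumption that the coarse-graining of states is given by a projection matrix A and the macroscopic dynamics by a coarse-grained transition matrix P_macro, the commutativity condition for row state vectors reduces to comparing P A with A P_macro, which the following sketch checks numerically.

<syntaxhighlight lang="python">
import numpy as np

def commutes(tpm_micro, projection, tpm_macro, tol=1e-10):
    """Commutativity check: one micro step followed by coarse-graining (P @ A)
    must equal coarse-graining followed by one macro step (A @ P_macro)."""
    return np.allclose(tpm_micro @ projection, projection @ tpm_macro, atol=tol)

micro = np.array([[1/3, 1/3, 1/3, 0],
                  [1/3, 1/3, 1/3, 0],
                  [1/3, 1/3, 1/3, 0],
                  [0,   0,   0,   1]])
A = np.array([[1, 0], [1, 0], [1, 0], [0, 1]], dtype=float)  # hard partition {0,1,2}, {3}
macro = np.array([[1.0, 0.0],
                  [0.0, 1.0]])

print(commutes(micro, A, macro))  # True: this partition satisfies the condition
</syntaxhighlight>
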
For specific methods of coarse-graining Markov chains, please refer to [[coarse-graining of Markov chains]].
 
      
==References==