From these early studies, it can be seen that emergence has a natural and profound connection with causality.
==== Causality and its measurement ====
With the further development of [[causal science]] in recent years, a mathematical framework is now available for quantifying causality. [[Causality]] describes the [[causal effect]] of a dynamical process <ref name=":14">Pearl J. Causality[M]. Cambridge University Press, 2009.</ref><ref>Granger C W. Investigating causal relations by econometric models and cross-spectral methods[J]. Econometrica: journal of the Econometric Society, 1969, 424-438.</ref><ref name=":8">Pearl J. Models, reasoning and inference[J]. Cambridge, UK: Cambridge University Press, 2000, 19(2).</ref>. Judea Pearl <ref name=":8" /> uses [[probabilistic graphical models]] to describe causal interactions, and uses different models to distinguish and quantify three levels of causality. Here we are more concerned with the second level in the [[causal ladder]]: [[intervening]] in the input distribution. In addition, due to the uncertainty and ambiguity behind discovered causal relationships, measuring the degree of causal effect between two variables is another important issue. Many independent historical studies have addressed the measurement of causal relationships. These measurement methods include [[Hume]]'s concept of [[constant conjunction]] <ref>Spirtes, P.; Glymour, C.; Scheines, R. Causation Prediction and Search, 2nd ed.; MIT Press: Cambridge, MA, USA, 2000.</ref> and value function-based methods <ref>Chickering, D.M. Learning equivalence classes of Bayesian-network structures. J. Mach. Learn. Res. 2002, 2, 445–498.</ref>, Eells and Suppes' probabilistic causal measures <ref>Eells, E. Probabilistic Causality; Cambridge University Press: Cambridge, UK, 1991; Volume 1.</ref><ref>Suppes, P. A probabilistic theory of causality. Br. J. Philos. Sci. 1973, 24, 409–410.</ref>, and Judea Pearl's [[causal measure]] indicators <ref name=":14" />.
==== Causal emergence ====
As mentioned earlier, emergence and causality are interconnected. Specifically, the connection exists in the following aspects: on the one hand, emergence is the causal effect of [[complex nonlinear interactions]] among the components of a [[Complex Systems|complex system]]; on the other hand, the emergent properties also have a causal effect on individual elements in complex systems. In addition, people have traditionally been accustomed to attributing macroscopic factors to the influence of microscopic factors, yet macroscopic emergent patterns often have no microscopic attribution, so no corresponding cause can be found. Thus, there is a profound connection between emergence and causality. Moreover, although we have a qualitative classification of [[emergence]], we cannot quantitatively characterize its occurrence. Therefore, we can use causality to quantitatively characterize the occurrence of emergence.

In 2013, [[Erik hoel|Erik Hoel]], an American theoretical neurobiologist, tried to introduce causality into the measurement of emergence, proposed the concept of causal emergence, and used [[effective information]] (EI for short) to quantify the strength of causality in system dynamics <ref name=":0" /><ref name=":1" />. '''Causal emergence can be described as follows: when a system exhibits a stronger causal effect on the macroscopic scale than on the microscopic scale, causal emergence occurs.''' Causal emergence characterizes well the differences and connections between the macroscopic and microscopic states of a system. At the same time, it brings together two core concepts: causality in [[artificial intelligence]] and emergence in complex systems. Causal emergence also provides scholars with a quantitative perspective for answering a series of philosophical questions. For example, the top-down causal characteristics of living or social systems can be discussed within the causal emergence framework. Top-down causation here refers to [[downward causation]], indicating the existence of macroscopic-to-microscopic causal effects. Consider, for example, the phenomenon of a gecko shedding its tail: when it encounters danger, the gecko breaks off its tail directly, regardless of the tail's condition. Here the whole is the cause and the tail is the effect, so there is a [[causal force]] pointing from the whole to the part.
=== Early work on quantifying emergence ===
There have been some early works that attempted to quantitatively analyze emergence. The [[computational mechanics]] theory proposed by Crutchfield et al. <ref name=":3">J. P. Crutchfield, K. Young, Inferring statistical complexity, Physical review letters 63 (2) (1989) 105.</ref> considers [[causal states]]. This method discusses related concepts based on a partition of the state space and is very similar to Erik Hoel's causal emergence theory. On the other hand, Seth et al. proposed the G-emergence theory <ref name=":4">A. K. Seth, Measuring emergence via nonlinear granger causality., in: alife, Vol. 2008, 2008, pp. 545–552.</ref> to quantify emergence using [[Granger causality]].
==== Computational mechanics ====
The [[computational mechanics]] theory attempts to express the causal laws of emergence in a quantitative framework, that is, how to construct a coarse-grained [[causal model]] from a random process so that this model can generate the time series of the observed random process <ref name=":3" />.
Here, the random process can be represented by <math>\overleftrightarrow{s}</math>. Based on time <math>t</math>, the [[random process]] can be divided into two parts: the process before time <math>t</math>, <math>\overleftarrow{s_t}</math>, and the process after time <math>t</math>, <math>\overrightarrow{s_t}</math>. Computational mechanics records the set of all possible historical processes <math>\overleftarrow{s_t}</math> as <math> \overleftarrow{S}</math>, and the set of all future processes <math>\overrightarrow{s_t}</math> as <math> \overrightarrow{S}</math>.
Furthermore, in order to achieve the best balance between predictability and simplicity for the set of macroscopic states, computational mechanics defines the concept of [[causal equivalence]]. If <math>P\left ( \overrightarrow{s}|\overleftarrow{s}\right )=P\left ( \overrightarrow{s}|{\overleftarrow{s}}'\right )</math>, then <math>\overleftarrow{s}</math> and <math>{\overleftarrow{s}}'</math> are causally equivalent. This equivalence relation divides all historical processes into equivalence classes, which are defined as [[causal states]]. All causal states of the historical process <math>\overleftarrow{s}</math> can be characterized by a mapping <math>\epsilon \left ( \overleftarrow{s} \right )</math>. Here, <math>\epsilon: \overleftarrow{\mathcal{S}}\rightarrow 2^{\overleftarrow{\mathcal{S}}}</math> is a function that maps the historical process <math>\overleftarrow{s}</math> to the causal state <math>\epsilon(\overleftarrow{s})\in 2^{\overleftarrow{\mathcal{S}}}</math>.
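The causal-equivalence partition described above can be sketched in a few lines of code: histories that induce the same conditional distribution over futures are grouped into one causal state. The toy distributions below are hypothetical and only illustrate the grouping rule, not the estimation procedure of the cited papers.

```python
import numpy as np

def causal_states(histories, cond_future, tol=1e-9):
    """Partition histories into causal states: two histories are causally
    equivalent when P(future | history) is (numerically) identical."""
    states = []  # list of (representative distribution, member histories)
    for h in histories:
        p = np.asarray(cond_future[h], dtype=float)
        for rep, members in states:
            if np.allclose(p, rep, atol=tol):  # causal equivalence test
                members.append(h)
                break
        else:
            states.append((p, [h]))
    return [members for _, members in states]

# Hypothetical one-step histories of a binary process and their
# conditional next-symbol distributions.
P = {
    "0":  [0.7, 0.3],
    "1":  [0.2, 0.8],
    "00": [0.7, 0.3],  # same future distribution as "0" -> same causal state
}
print(causal_states(["0", "1", "00"], P))  # -> [['0', '00'], ['1']]
```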
Further, we can denote the causal transition probability between two [[causal states]] <math>S_i</math> and <math>S_j</math> as <math>T_{ij}^{\left ( s \right )}</math>, which is similar to a coarse-grained macroscopic dynamics. The <math>\epsilon</math>-machine of a random process is defined as the ordered pair <math>\left \{ \epsilon,T \right \}</math>. This is a pattern discovery machine that achieves prediction by learning the <math>\epsilon</math> and <math>T</math> functions. This is equivalent to defining the so-called identification problem of emergent causality. Here, the <math>\epsilon</math>-machine is a machine that attempts to discover emergent causality in data.
The causal emergence framework has many similarities with computational mechanics. All historical processes <math>\overleftarrow{s}</math> can be regarded as microscopic states. All <math>R \in \mathcal{R}</math> correspond to macroscopic states. The function <math>\eta</math> can be understood as a possible coarse-graining function. The causal state <math>\epsilon \left ( \overleftarrow{s} \right )</math> is a special state that has at least the same predictive power as the microscopic state <math>\overleftarrow{s}</math>. Therefore, <math>\epsilon</math> can be understood as an effective [[coarse-graining]] strategy. The causal transition probability <math>T</math> corresponds to an effective macroscopic dynamics. The minimum-randomness property characterizes the determinism of macroscopic dynamics and can be measured by [[effective information]] in causal emergence.
==== G-emergence ====
The G-emergence theory was proposed by Seth in 2008 and is one of the earliest studies to quantify [[emergence]] from a causal perspective <ref name=":4" />. The basic idea is to use nonlinear [[Granger causality|Granger causality]] to quantify [[weak emergence]] in complex systems.
Specifically, suppose we use a bivariate autoregressive model for prediction. When there are only two variables, A and B, the [[autoregressive model]] has two equations; each equation corresponds to one of the variables, and the current value of each variable is predicted from its own past values and the past values of the other variable within a certain time-lag range. In addition, the model calculates residuals. Here, residuals can be understood as prediction errors and can be used to measure the degree of Granger causal effect (called G-causality) of each equation. The degree to which B is a Granger cause (G-cause) of A is calculated as the logarithm of the ratio of two residual variances: the residual variance of A's autoregressive model when B is omitted, and that of the full prediction model (including A and B). In addition, the author also defines the concept of "G-autonomous", which measures the extent to which the past values of a time series can predict its own future values. The strength of this autonomous predictive causal effect can be characterized in a way similar to G-causality.
As shown in the figure above, we can judge the occurrence of emergence based on the two basic concepts of G-causality introduced above (the resulting measure of emergence based on [[Granger causality|Granger causality]] is denoted G-emergence). If A is understood as a macroscopic variable and B as a microscopic variable, two conditions must hold for emergence to occur: 1) A is G-autonomous with respect to B; 2) B is a G-cause of A. The degree of G-emergence is calculated by multiplying A's degree of G-autonomy by B's average degree of G-cause.
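The residual-variance comparison behind G-causality can be sketched as follows. This is a minimal lag-1 least-squares illustration, not Seth's original implementation; the toy data-generating model and all variable names are ours.

```python
import numpy as np

def g_causality(a, b):
    """G-causality of B on A at lag 1: log-ratio of residual variances
    between the restricted model (A's own past only) and the full model
    (A's and B's past)."""
    y = a[1:]
    ones = np.ones_like(y)
    X_restricted = np.column_stack([ones, a[:-1]])          # A only
    X_full = np.column_stack([ones, a[:-1], b[:-1]])        # A and B
    def resid_var(X):
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        return np.var(y - X @ beta)
    return np.log(resid_var(X_restricted) / resid_var(X_full))

# Toy system in which B strongly drives A.
rng = np.random.default_rng(0)
b = rng.normal(size=500)
a = np.empty(500)
a[0] = 0.0
for t in range(1, 500):
    a[t] = 0.5 * a[t - 1] + 0.8 * b[t - 1] + 0.1 * rng.normal()

print(g_causality(a, b))  # clearly positive: B is a G-cause of A
```

The same log-ratio construction, applied to a series against its own past, gives the G-autonomy measure mentioned above.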
==== Other theories for quantitatively characterizing emergence ====
In addition, there are some other quantitative theories of emergence. Two methods are mainly discussed. One is to understand [[emergence]] as the process from disorder to order. Moez Mnif and Christian Müller-Schloer <ref>Mnif, M.; Müller-Schloer, C. Quantitative emergence. In Organic Computing—A Paradigm Shift for Complex Systems; Springer: Basel, Switzerland, 2011; pp. 39–52.</ref> use [[Shannon entropy]] to measure order and disorder. In the [[self-organization]] process, emergence occurs when order increases; the increase in order is calculated as the difference in Shannon entropy between the initial state and the final state. However, the defect of this method is that it depends on the abstract observation level and the initial conditions of the system. To overcome these two difficulties, the authors propose a measurement method that compares against the maximum entropy distribution. Inspired by the work of Moez Mnif and Christian Müller-Schloer, reference <ref>Fisch, D.; Jänicke, M.; Sick, B.; Müller-Schloer, C. Quantitative emergence–A refined approach based on divergence measures. In Proceedings of the 2010 Fourth IEEE International Conference on Self-Adaptive and Self-Organizing Systems, Budapest, Hungary, 27 September–1 October 2010; IEEE Computer Society: Washington, DC, USA, 2010; pp. 94–103.</ref> suggests using the divergence between two probability distributions to quantify emergence. They understand emergence as an unexpected or unpredictable distribution change relative to the observed samples. But this method has disadvantages such as high computational complexity and low estimation accuracy. To solve these problems, reference <ref>Fisch, D.; Jänicke, M.; Kalkowski, E.; Sick, B. Techniques for knowledge acquisition in dynamically changing environments. ACM Trans. Auton. Adapt. Syst. (TAAS) 2012, 7, 1–25.</ref> further proposes an approximate method for estimating density using [[Gaussian mixture models]] and introduces the [[Mahalanobis distance]] to characterize the difference between data and Gaussian components, thus obtaining better results. In addition, Holzer, de Meer et al. <ref>Holzer, R.; De Meer, H.; Bettstetter, C. On autonomy and emergence in self-organizing systems. In International Workshop on Self-Organizing Systems, Proceedings of the Third International Workshop, IWSOS 2008, Vienna, Austria, 10–12 December 2008; Springer: Berlin/Heidelberg, Germany, 2008; pp. 157–169.</ref><ref>Holzer, R.; de Meer, H. Methods for approximations of quantitative measures in self-organizing systems. In Proceedings of the Self-Organizing Systems: 5th International Workshop, IWSOS 2011, Karlsruhe, Germany, 23–24 February 2011; Proceedings 5; Springer: Berlin/Heidelberg, Germany, 2011; pp. 1–15.</ref> proposed another emergence measurement method based on Shannon entropy. They regard a complex system as a self-organizing process in which different individuals interact through communication. Emergence can then be measured as the ratio between the Shannon entropy of all communications among agents and the sum of the Shannon entropies of the agents as separate sources.
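The entropy-difference idea of Mnif and Müller-Schloer can be illustrated with a toy before/after distribution; the distributions below are made up for illustration only.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # 0 * log(0) is taken as 0
    return -np.sum(p * np.log2(p))

# Order gain during self-organization: entropy of the initial
# (disordered) state minus entropy of the final (ordered) state.
p_initial = [0.25, 0.25, 0.25, 0.25]  # uniform: H = 2 bits
p_final = [0.85, 0.05, 0.05, 0.05]    # concentrated after self-organization
order_gain = shannon_entropy(p_initial) - shannon_entropy(p_final)
print(order_gain)  # positive: order increased, emergence occurred
```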
In 2024, Zhang Jiang et al. <ref name=":2">Zhang J, Tao R, Yuan B. Dynamical Reversibility and A New Theory of Causal Emergence. arXiv preprint arXiv:2402.15054. 2024 Feb 23.</ref> proposed a new causal emergence theory based on singular value decomposition. The core idea of this theory is that the so-called causal emergence is actually equivalent to the emergence of dynamical reversibility. Given the Markov transition matrix of a system, by performing singular value decomposition on it, the sum of the <math>\alpha</math>-th power of the singular values is defined as the reversibility measure of the Markov dynamics (<math>\Gamma_{\alpha}\equiv \sum_{i=1}^N\sigma_i^{\alpha}</math>), where <math>\sigma_i</math> is the <math>i</math>-th singular value. This index is highly correlated with effective information and can also be used to characterize the causal effect strength of dynamics. According to the spectrum of singular values, this method can directly define the concepts of '''clear emergence''' and '''vague emergence''' without explicitly defining a coarse-graining scheme.
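The reversibility measure <math>\Gamma_{\alpha}</math> follows directly from the singular values of the transition matrix. A minimal sketch on two extreme toy matrices (our own examples): the identity (a fully reversible, deterministic dynamics) and a uniform matrix with identical rows (a fully irreversible one).

```python
import numpy as np

def gamma_alpha(P, alpha=1.0):
    """Reversibility of a Markov transition matrix P:
    Gamma_alpha = sum_i sigma_i ** alpha over the singular values of P."""
    sigma = np.linalg.svd(P, compute_uv=False)
    return float(np.sum(sigma ** alpha))

P_id = np.eye(4)              # reversible: all singular values are 1
P_uni = np.full((4, 4), 0.25)  # irreversible: rank 1, single unit singular value
print(gamma_alpha(P_id), gamma_alpha(P_uni))  # -> 4.0 1.0
```

For <math>N</math> states, <math>\Gamma_{\alpha}</math> thus ranges from 1 (maximally irreversible) up to <math>N</math> (fully reversible).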
      
== Quantification of causal emergence ==