The causal emergence framework has many similarities with computational mechanics. All historical processes <math>\overleftarrow{s}</math> can be regarded as microscopic states. All <math>R \in \mathcal{R}</math> correspond to macroscopic states. The function <math>\eta</math> can be understood as a possible coarse-graining function. The causal state <math>\epsilon \left ( \overleftarrow{s} \right )</math> is a special state that has at least the same predictive power as the microscopic state <math>\overleftarrow{s}</math>; therefore, <math>\epsilon</math> can be understood as an effective [[coarse-graining]] strategy. The causal transition <math>T</math> corresponds to effective macroscopic dynamics. The property of minimal randomness characterizes the determinism of the macroscopic dynamics and, in causal emergence, can be measured by [[effective information]].
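To make the analogy concrete, the <math>\epsilon</math>-mapping can be approximated from data by grouping histories that predict the same future. The following is a minimal illustrative sketch (not the original authors' algorithm): it clusters length-<math>k</math> histories of a discrete sequence by their empirical next-symbol distributions, so that each cluster plays the role of a causal state; the function name and the tolerance <code>tol</code> are ad hoc choices for finite data.

<syntaxhighlight lang="python">
import numpy as np
from collections import defaultdict

def approximate_causal_states(sequence, k=3, tol=0.05):
    """Group length-k histories whose empirical next-symbol distributions
    (approximately) coincide -- a crude approximation of the epsilon-mapping."""
    futures = defaultdict(lambda: defaultdict(int))
    for t in range(len(sequence) - k):
        history = tuple(sequence[t:t + k])
        futures[history][sequence[t + k]] += 1
    alphabet = sorted({s for counts in futures.values() for s in counts})
    states = []  # each state: (representative future distribution, member histories)
    for history, counts in futures.items():
        total = sum(counts.values())
        dist = np.array([counts.get(a, 0) / total for a in alphabet])
        for rep, members in states:
            if np.abs(rep - dist).max() < tol:  # same predictive power -> same state
                members.append(history)
                break
        else:
            states.append((dist, [history]))
    return states
</syntaxhighlight>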
 
==== G-emergence ====
 
==== Other theories for quantitatively characterizing emergence ====
 
In addition, there are some other quantitative theories of emergence. Two approaches are widely discussed. One is to understand [[emergence]] as a process from disorder to order. Moez Mnif and Christian Müller-Schloer <ref>Mnif, M.; Müller-Schloer, C. Quantitative emergence. In Organic Computing—A Paradigm Shift for Complex Systems; Springer: Basel, Switzerland, 2011; pp. 39–52.</ref> use [[Shannon entropy]] to measure order and disorder. In a [[self-organization]] process, emergence occurs when order increases, and the increase in order is calculated as the difference in Shannon entropy between the initial state and the final state. However, a drawback of this method is that it depends on the level of abstraction of the observation and on the initial conditions of the system. To overcome these two difficulties, the authors propose a measurement relative to the maximum entropy distribution. Inspired by the work of Mnif and Müller-Schloer, reference <ref>Fisch, D.; Jänicke, M.; Sick, B.; Müller-Schloer, C. Quantitative emergence–A refined approach based on divergence measures. In Proceedings of the 2010 Fourth IEEE International Conference on Self-Adaptive and Self-Organizing Systems, Budapest, Hungary, 27 September–1 October 2010; IEEE Computer Society: Washington, DC, USA, 2010; pp. 94–103.</ref> suggests using the divergence between two probability distributions to quantify emergence, understanding emergence as an unexpected or unpredictable change of distribution based on the observed samples. However, this method suffers from high computational complexity and low estimation accuracy. To address these problems, reference <ref>Fisch, D.; Jänicke, M.; Kalkowski, E.; Sick, B. Techniques for knowledge acquisition in dynamically changing environments. ACM Trans. Auton. Adapt. Syst. (TAAS) 2012, 7, 1–25.</ref> further proposes an approximate method for density estimation using [[Gaussian mixture models]] and introduces the [[Mahalanobis distance]] to characterize the difference between the data and the Gaussian components, thus obtaining better results. In addition, Holzer, de Meer et al. <ref>Holzer, R.; De Meer, H.; Bettstetter, C. On autonomy and emergence in self-organizing systems. In International Workshop on Self-Organizing Systems, Proceedings of the Third International Workshop, IWSOS 2008, Vienna, Austria, 10–12 December 2008; Springer: Berlin/Heidelberg, Germany, 2008; pp. 157–169.</ref><ref>Holzer, R.; de Meer, H. Methods for approximations of quantitative measures in self-organizing systems. In Proceedings of the Self-Organizing Systems: 5th International Workshop, IWSOS 2011, Karlsruhe, Germany, 23–24 February 2011; Proceedings 5; Springer: Berlin/Heidelberg, Germany, 2011; pp. 1–15.</ref> proposed another emergence measure based on Shannon entropy. They regard a complex system as a self-organizing process in which different individuals interact through communication, and measure emergence as the ratio between the Shannon entropy of all communications among agents taken jointly and the sum of the Shannon entropies of the agents treated as separate sources.
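As a hedged sketch of the Mnif and Müller-Schloer idea, the measure can be computed as the decrease in empirical Shannon entropy between two snapshots of an observed attribute. The variable names and the histogram binning below are illustrative assumptions; the dependence of the result on <code>bins</code> reflects precisely the observation-level dependence criticized above.

<syntaxhighlight lang="python">
import numpy as np

def shannon_entropy(samples, bins=16):
    """Empirical Shannon entropy (in bits) of scalar observations."""
    counts, _ = np.histogram(samples, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log2(p)))

# Toy self-organization: a disordered initial state and an ordered final state.
rng = np.random.default_rng(0)
x_initial = rng.uniform(0.0, 1.0, 10_000)   # high-entropy start
x_final = rng.normal(0.5, 0.02, 10_000)     # attribute clusters as order forms
emergence = shannon_entropy(x_initial) - shannon_entropy(x_final)
print(f"entropy decrease (emergence): {emergence:.2f} bits")
</syntaxhighlight>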
 
=== Causal emergence theory based on effective information ===
 
Historically, the first relatively complete and explicit quantitative theory that uses causality to define emergence is the causal emergence theory proposed by [[Erik Hoel]], [[Larissa Albantakis]] and [[Giulio Tononi]] <ref name=":0" /><ref name=":1" />. For [[Markov chains]], this theory defines causal emergence as the phenomenon that the coarse-grained Markov chain has a greater causal effect strength than the original one. Here, causal effect strength is measured by [[effective information]]. This indicator is a modification of [[mutual information]]: the main difference is that the state variable at time <math>t</math> is intervened upon by a [[do-intervention]] and set to a [[uniform distribution]] (the [[maximum entropy distribution]]). The [[effective information]] indicator was proposed by [[Giulio Tononi]] as early as 2003 in his work on [[integrated information theory]]. As [[Giulio Tononi]]'s student, [[Erik Hoel]] applied effective information to Markov chains and thereby proposed the causal emergence theory based on effective information.
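For a Markov transition matrix, effective information has a closed form: under the do-intervention the input is uniform, so EI equals the average KL divergence between each row of the matrix and the mean row. A minimal sketch (the function name is ours):

<syntaxhighlight lang="python">
import numpy as np

def effective_information(P):
    """EI (in bits) of a row-stochastic transition matrix P, i.e. the mutual
    information I(X_t; X_{t+1}) when X_t is intervened to be uniform."""
    N = P.shape[0]
    p_out = P.mean(axis=0)  # distribution of X_{t+1} under do(X_t ~ uniform)
    ei = 0.0
    for row in P:           # average KL divergence of each row from p_out
        nz = row > 0
        ei += np.sum(row[nz] * np.log2(row[nz] / p_out[nz])) / N
    return ei

# A deterministic permutation gives the maximum EI = log2(N);
# identical rows (no causal effect) would give EI = 0.
P = np.eye(4)[[1, 2, 3, 0]]
print(effective_information(P))  # 2.0 bits
</syntaxhighlight>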
 
=== Causal Emergence Theory Based on Information Decomposition ===
 
In addition, in 2020, Rosas et al. <ref name=":5" /> proposed a method based on [[information decomposition]] to define causal emergence from an [[information theory]] perspective, quantitatively characterizing emergence through [[synergistic information]] and [[redundant information]]. [[Information decomposition]] is a method for analyzing the complex interrelationships among the variables of a [[complex system]]: the information is decomposed into [[information atom|information atoms]], which are organized with the help of an [[information lattice diagram]], and both synergistic information and redundant information can be represented by corresponding information atoms. This method builds on the [[non-negative decomposition theory of multivariate information]] proposed by Williams and Beer <ref name=":16">Williams P L, Beer R D. Nonnegative decomposition of multivariate information[J]. arXiv preprint arXiv:1004.2515, 2010.</ref>. In the paper, [[partial information decomposition]] (PID) is used to decompose the mutual information between microstates and macrostates. However, the PID framework can only decompose the mutual information between multiple source variables and a single target variable. Rosas extended this framework and proposed the integrated information decomposition method <math>\Phi ID</math> <ref name=":18">P. A. Mediano, F. Rosas, R. L. Carhart-Harris, A. K. Seth, A. B. Barrett, Beyond integrated information: A taxonomy of information dynamics phenomena, arXiv preprint arXiv:1909.02297 (2019).</ref>.
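As an illustration of the Williams–Beer decomposition this framework builds on, the sketch below computes the <math>I_{min}</math> redundancy for two discrete sources and derives the synergy from it. This is our minimal reading of the PID definitions, not Rosas et al.'s <math>\Phi ID</math> code. For the XOR gate, the entire one bit of mutual information is synergistic.

<syntaxhighlight lang="python">
import numpy as np

def mutual_info(pxy):
    """I(X;Y) in bits from a joint distribution table p[x, y]."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def i_min_redundancy(p):
    """Williams-Beer redundant information I_min(Y; X1, X2) for p[x1, x2, y]."""
    py = p.sum(axis=(0, 1))
    specific = []
    for drop_axis in (1, 0):              # marginalize out the other source
        pxiy = p.sum(axis=drop_axis)      # p(x_i, y)
        pxi = pxiy.sum(axis=1)            # p(x_i)
        spec = np.zeros(len(py))
        for y in range(len(py)):
            col = pxiy[:, y]
            nz = col > 0
            # specific information I(Y=y; X_i)
            spec[y] = np.sum((col[nz] / py[y]) * np.log2(col[nz] / (pxi[nz] * py[y])))
        specific.append(spec)
    return float(np.sum(py * np.minimum(*specific)))

# XOR example: y = x1 XOR x2 with uniform inputs.
p = np.zeros((2, 2, 2))
for x1 in (0, 1):
    for x2 in (0, 1):
        p[x1, x2, x1 ^ x2] = 0.25
red = i_min_redundancy(p)                 # 0 bits of redundancy
syn = (mutual_info(p.reshape(4, 2)) - mutual_info(p.sum(axis=1))
       - mutual_info(p.sum(axis=0)) + red)  # 1 bit of synergy
print(red, syn)
</syntaxhighlight>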
 
=== Recent Work ===
 
In 2024, [[Zhang Jiang]] et al. <ref name=":2">Zhang J, Tao R, Yuan B. Dynamical Reversibility and A New Theory of Causal Emergence. arXiv preprint arXiv:2402.15054. 2024 Feb 23.</ref> proposed a new causal emergence theory based on [[singular value decomposition]]. The core idea of this theory is that so-called causal emergence is equivalent to the emergence of dynamical reversibility. Given the Markov transition matrix of a system, performing singular value decomposition on it and summing the <math>\alpha</math>-th powers of the singular values defines the [[reversibility measure]] of the Markov dynamics, <math>\Gamma_{\alpha}\equiv \sum_{i=1}^N\sigma_i^{\alpha}</math>, where <math>\sigma_i</math> is the <math>i</math>-th singular value. This index is highly correlated with [[effective information]] and can likewise be used to characterize the causal effect strength of the dynamics. Based on the spectrum of singular values, this method can directly define the concepts of '''clear emergence''' and '''vague emergence''' without explicitly specifying a coarse-graining scheme.
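A minimal sketch of the reversibility measure (our implementation of the stated formula): compute the singular values of the transition matrix and sum their <math>\alpha</math>-th powers. A deterministic permutation of <math>N</math> states (fully reversible) yields <math>\Gamma_{\alpha}=N</math>, while a matrix with identical uniform rows (maximally irreversible) yields <math>\Gamma_{\alpha}=1</math>.

<syntaxhighlight lang="python">
import numpy as np

def gamma_alpha(P, alpha=1.0):
    """Reversibility measure Gamma_alpha = sum_i sigma_i**alpha of a
    Markov transition matrix P, computed from its singular values."""
    sigma = np.linalg.svd(P, compute_uv=False)
    return float(np.sum(sigma ** alpha))

print(gamma_alpha(np.eye(4)[[1, 2, 3, 0]]))  # permutation: 4.0 (reversible)
print(gamma_alpha(np.full((4, 4), 0.25)))    # uniform rows: 1.0 (irreversible)
</syntaxhighlight>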
 
== Quantification of causal emergence ==
 
Effective information can be decomposed into two parts: '''determinism''' and '''degeneracy'''. For more detailed information about Effective Information, please refer to the entry: [[Effective Information]].
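A hedged sketch of this decomposition for a Markov transition matrix, following the standard definitions in which determinism and degeneracy are each measured against the <math>\log_2 N</math> bound and EI is their difference (the function name is ours):

<syntaxhighlight lang="python">
import numpy as np

def ei_determinism_degeneracy(P):
    """Split EI of a row-stochastic matrix P into determinism and degeneracy.

    determinism = log2(N) - <H(row_i)>   (how noiseless each transition is)
    degeneracy  = log2(N) - H(mean row)  (how much different inputs collapse)
    EI          = determinism - degeneracy
    """
    def H(p):
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))
    N = P.shape[0]
    determinism = np.log2(N) - np.mean([H(row) for row in P])
    degeneracy = np.log2(N) - H(P.mean(axis=0))
    return determinism, degeneracy, determinism - degeneracy
</syntaxhighlight>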
 
=====Causal Emergence Measurement=====
 
|Dynamic independence <ref name=":6"/>||Granger causality||Requires specifying a coarse-graining method||Arbitrary dynamics||Dynamic independence: transfer entropy
 
|}
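Regarding the last row of the table: dynamic independence declares a macroscopic variable emergent when the transfer entropy from the microscopic details to it, conditioned on its own past, vanishes. Below is a minimal plug-in estimator of one-step transfer entropy for discrete time series; the variable names are ours, and in practice longer histories and bias corrections are needed.

<syntaxhighlight lang="python">
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in estimate of TE(Y -> X) = I(X_{t+1}; Y_t | X_t) in bits,
    with one-step histories, for two discrete time series."""
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))  # (x_{t+1}, x_t, y_t)
    n = sum(triples.values())
    c_xy = Counter(); c_x = Counter(); c_xx = Counter()
    for (x1, x0, y0), c in triples.items():
        c_xy[(x0, y0)] += c     # counts of (x_t, y_t)
        c_x[x0] += c            # counts of x_t
        c_xx[(x1, x0)] += c     # counts of (x_{t+1}, x_t)
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        te += (c / n) * np.log2(c * c_x[x0] / (c_xy[(x0, y0)] * c_xx[(x1, x0)]))
    return te
</syntaxhighlight>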
 
==Identification of Causal Emergence==
 
These experiments show that NIS+ can not only identify causal emergence in data and discover the emergent macroscopic dynamics and coarse-graining strategies; further experiments show that the [[NIS+]] model can also improve out-of-distribution generalization through EI maximization.
 
==Applications==
 
For example, taking the concept of the [[five elements]] in Eastern philosophy, we can understand the [[five elements]] as five macroscopic states of all things, and the relations of mutual generation and restraint among them as a macroscopic causal mechanism operating between these five states. The cognitive process of extracting these five states from all things is then a coarse-graining process, which depends on the observer's capacity for analogy. The theory of the five elements can therefore be regarded as an abstract causal emergence theory for all things. Similarly, we can apply the concept of causal emergence to more fields, including traditional Chinese medicine, divination, and feng shui. What these applications have in common is that their causal mechanisms are simpler, and possibly exhibit stronger causality, than those of Western science, but the process of arriving at such an abstract coarse-graining is more complex and more dependent on experienced abstractors. This explains why Eastern philosophies all emphasize the self-cultivation of practitioners: these philosophical theories place an enormous complexity and computational burden on '''analogical thinking'''.
 
==Critique==
 
However, as pointed out in the literature <ref name=":6" />, the above problem can be alleviated by taking the model's error into account while maximizing EI in the continuous variable space. Although machine learning techniques facilitate the learning of causal relationships and causal mechanisms and the identification of emergent properties, an important question remains: do the results obtained through machine learning reflect ontological causality and emergence, or are they merely epistemological phenomena? This is still undecided. While the introduction of machine learning does not necessarily settle the debate between ontological and epistemological causality and emergence, it can help reduce subjectivity, because a machine learning agent can be regarded as an "objective" observer whose judgments about causality and emergence are independent of human observers. However, the problem of the uniqueness of the solution still exists in this method. Is the result of machine learning ontological or epistemological? The answer is that it is epistemological, where the epistemic subject is the machine learning algorithm. This does not mean that all results of machine learning are meaningless: if the learning subject is well trained and the defined mathematical objective is effectively optimized, the result can also be considered objective, because the algorithm itself is objective and transparent. Combining machine learning methods can help us establish a theoretical framework for observers and study the interaction between observers and the corresponding observed complex systems.
 
==Related research fields==
 
For specific methods of coarse-graining Markov chains, please refer to [[coarse-graining of Markov chains]].
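As a minimal sketch of the basic operation discussed there (one simple convention among several; weighting source states by the stationary distribution is also common), the following lumps micro-states into macro-states by summing transition probabilities over each target group and averaging over the source states within each group:

<syntaxhighlight lang="python">
import numpy as np

def coarse_grain(P, partition):
    """Lump micro-states of transition matrix P into macro-states.

    partition: list of lists of micro-state indices, e.g. [[0, 1], [2]].
    Rows are aggregated with uniform weights within each group."""
    k = len(partition)
    Q = np.zeros((k, k))
    for a, group_a in enumerate(partition):
        for b, group_b in enumerate(partition):
            # sum over target states, average over source states
            Q[a, b] = P[np.ix_(group_a, group_b)].sum(axis=1).mean()
    return Q  # Q is again row-stochastic
</syntaxhighlight>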
 
==References==
 