==Critique==
 
Although the theory of causal emergence based on maximizing effective information, first proposed by Erik Hoel and colleagues, characterizes the phenomenon of causal emergence well and has been applied to many real systems, some scholars have pointed out that the theory still has a number of flaws. These concern both the philosophical level and the technical level of coarse-graining Markov dynamical systems.
===From the philosophical perspective===
 
Throughout history, there has been a long-standing debate on the [[ontological]] and [[epistemological]] aspects of causality and emergence.
 
For example, Yurchenko pointed out in the literature <ref>Yurchenko, S. B. (2023). Can there be a synergistic core emerging in the brain hierarchy to control neural activity by downward causation?. Authorea Preprints.</ref> that the concept of "causation" is often ambiguous and should be split into the two distinct concepts of '''"cause"''' and '''"reason"''', which correspond to ontological and epistemological causality, respectively. A cause is what actually and fully brings about an effect, whereas a reason is only the observer's explanation of that effect. A reason may not be as strict as a real cause, but it does provide a certain degree of [[predictability]]. Similarly, there is a debate about the nature of causal emergence.
===From the technical perspective===
====Non-uniqueness====
The degree of causal emergence is defined as the difference between the effective information (EI) of the coarse-grained macroscopic dynamics and the EI of the original microscopic dynamics, so the result obviously depends on the choice of coarse-graining scheme. To eliminate this arbitrariness, Hoel and colleagues introduced the idea of maximizing effective information: causal emergence in a system is judged and measured with respect to the coarse-graining that maximizes the EI of the macroscopic dynamics. However, there is currently no theoretical guarantee that maximizing the macroscopic EI yields a unique coarse-graining scheme; it is entirely possible for multiple coarse-graining schemes to correspond to the same macroscopic EI. In fact, studies on continuous mapping dynamical systems have already shown that the EI-maximization problem can have infinitely many solutions <ref name="exact">{{cite journal |last1=Liu|first1=K.W.|last2=Yuan|first2=B.|last3=Zhang|first3=J.|title=An Exact Theory of Causal Emergence for Linear Stochastic Iteration Systems|journal=Entropy|volume=26 |issue=8 |year=2024|page=618}}</ref>.
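The following minimal sketch illustrates this non-uniqueness numerically; it is a toy construction rather than an example from the cited literature. It assumes the usual discrete definition of EI under a uniform (maximum-entropy) intervention on the input states, <math>EI=\frac{1}{N}\sum_{i=1}^{N} D_{KL}\!\left(P_i \,\|\, \bar{P}\right)</math>, with <math>P_i</math> the i-th row of the TPM and <math>\bar{P}</math> the average row, together with a coarse-graining that merges states by averaging their rows and summing the grouped columns. The function names (<code>ei</code>, <code>coarse_grain</code>) and the 4-state example are hypothetical.

<syntaxhighlight lang="python">
import numpy as np

def ei(tpm):
    """EI of a TPM: mean KL divergence (in bits) of each row from the
    average row, i.e. I(X_t; X_{t+1}) under a uniform intervention on X_t."""
    tpm = np.asarray(tpm, dtype=float)
    avg = tpm.mean(axis=0)
    total = 0.0
    for row in tpm:
        mask = row > 0
        total += np.sum(row[mask] * np.log2(row[mask] / avg[mask]))
    return total / len(tpm)

def coarse_grain(tpm, groups):
    """Lump micro states: average the rows within each group (uniform
    intervention inside the group) and sum the columns of each group."""
    tpm = np.asarray(tpm, dtype=float)
    macro = np.zeros((len(groups), len(groups)))
    for a, ga in enumerate(groups):
        avg_row = tpm[list(ga)].mean(axis=0)
        for b, gb in enumerate(groups):
            macro[a, b] = avg_row[list(gb)].sum()
    return macro

# Toy micro dynamics: two decoupled 2-cycles, 0 <-> 1 and 2 <-> 3.
P = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

print("micro EI =", ei(P))  # 2.0 bits
for groups in [((0, 1), (2, 3)), ((0, 2), (1, 3)), ((0, 3), (1, 2))]:
    print(groups, "-> macro EI =", ei(coarse_grain(P, groups)))  # 1.0 bit each
</syntaxhighlight>

All three ways of pairing the four states into two macroscopic states yield exactly the same macroscopic EI (1 bit), so maximizing EI over 2-state coarse-grainings does not single out one scheme. Note that this toy system shows no causal emergence (the macroscopic EI is lower than the microscopic EI of 2 bits); the sketch only illustrates the degeneracy of the maximization.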
 
====Ambiguity====
Furthermore, it has been pointed out that Hoel's theory places no constraints on the coarse-graining method, and that some coarse-graining methods can lead to ambiguous or unreasonable results<ref name="Eberhardt">Eberhardt, F., & Lee, L. L. (2022). Causal emergence: When distortions in a map obscure the territory. Philosophies, 7(2), 30.</ref>.
 
First, Eberhardt and Lee<ref name="Eberhardt"/> point out that if no constraints are imposed on the coarse-graining scheme, ambiguity can arise when coarse-graining the transition probability matrix (TPM) by merging states and summing their probabilities. For example, when the two row vectors of the TPM corresponding to two states to be merged are very dissimilar, forcibly merging them (for example, by averaging) causes ambiguity. The ambiguity lies in what an intervention on the merged macroscopic state actually means: because the row vectors are dissimilar, such an intervention cannot simply be reduced to an intervention on the underlying microscopic states. If the macroscopic intervention is forcibly converted into a microscopic one by averaging, the differences between the microscopic states are ignored, and new problems of non-commutativity arise as well, as illustrated by the sketch below and in the next subsection.
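A minimal numerical sketch of this ambiguity (a toy example, not taken from the cited paper): suppose the two micro states to be merged have the very dissimilar row vectors [0, 0, 1] and [1, 0, 0]. The macroscopic intervention do(A) can be pushed down to the micro level in several ways, and each choice predicts a different next-step distribution, so the meaning of the intervention is under-determined.

<syntaxhighlight lang="python">
import numpy as np

# Micro TPM over states {0, 1, 2}; states 0 and 1 (to be merged into macro
# state A) have very dissimilar rows, state 2 plays the role of macro state B.
P = np.array([[0.0, 0.0, 1.0],   # state 0 goes to state 2
              [1.0, 0.0, 0.0],   # state 1 goes to state 0
              [0.0, 1.0, 0.0]])  # state 2 goes to state 1

# Three micro-level readings of the macro intervention do(A): put all
# probability on state 0, all on state 1, or spread it uniformly over {0, 1}.
for name, mu in [("all on state 0   ", np.array([1.0, 0.0, 0.0])),
                 ("all on state 1   ", np.array([0.0, 1.0, 0.0])),
                 ("uniform on {0, 1}", np.array([0.5, 0.5, 0.0]))]:
    print(name, "-> next-step distribution:", mu @ P)
# Output: [0, 0, 1], [1, 0, 0] and [0.5, 0, 0.5] -- the consequence of the
# same macroscopic intervention depends on an arbitrary micro-level convention.
</syntaxhighlight>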
====Non-commutativity====
If two dissimilar row vectors are forcibly averaged, the resulting coarse-grained TPM may break the commutativity between the abstraction operation (i.e., coarse-graining) and marginalization (i.e., the time-evolution operator). For example, assume that <math>A_{m\times n}</math> is a state coarse-graining operation (combining n states into m states), where the coarse-graining strategy is the one that maximizes the effective information of the macroscopic state transition matrix, and that <math>(\cdot) \times (\cdot)</math> is a time coarse-graining operation (combining two time steps into one). Then <math>A_{m\times n}(TPM_{n\times n})</math> denotes the coarse-graining of an <math>n\times n</math> TPM, where the coarse-graining process is simplified as the product of the matrix <math>A</math> and the matrix <math>TPM</math>.
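The sketch below illustrates this non-commutativity on the same toy 3-state TPM as in the previous sketch; it is a hypothetical construction, and the lumping map used here (averaging rows within a group and summing the grouped columns) is only one common way of realizing the abstraction operation, not necessarily the exact operator of the cited paper. Coarse-graining the two-step micro dynamics, i.e. applying the lumping to <math>TPM\times TPM</math>, differs from evolving the coarse-grained dynamics for two steps.

<syntaxhighlight lang="python">
import numpy as np

def coarse_grain(tpm, groups):
    """Lump micro states: average the rows within each group (uniform
    intervention inside the group) and sum the columns of each group."""
    tpm = np.asarray(tpm, dtype=float)
    macro = np.zeros((len(groups), len(groups)))
    for a, ga in enumerate(groups):
        avg_row = tpm[list(ga)].mean(axis=0)
        for b, gb in enumerate(groups):
            macro[a, b] = avg_row[list(gb)].sum()
    return macro

# Deterministic 3-cycle 0 -> 2 -> 1 -> 0; the dissimilar states {0, 1}
# are merged into macro state A, state 2 becomes macro state B.
P = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
groups = ((0, 1), (2,))

Q = coarse_grain(P, groups)
coarse_then_evolve = Q @ Q                        # two steps of the macro TPM
evolve_then_coarse = coarse_grain(P @ P, groups)  # macro view of two micro steps

print(coarse_then_evolve)  # [[0.75, 0.25], [0.5, 0.5]]
print(evolve_then_coarse)  # [[0.5, 0.5], [1.0, 0.0]]
print(np.allclose(coarse_then_evolve, evolve_then_coarse))  # False
</syntaxhighlight>

Because the two results differ, the order in which the state coarse-graining and the time coarse-graining are applied matters whenever dissimilar rows are merged by averaging.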
However, as pointed out in the literature <ref name=":6" />, the above problem can be alleviated by taking the error of the model into account while maximizing EI in a continuous variable space.
 
====Other problems and remedies====
In addition, the calculation of Hoel's <math>EI</math> and the quantification of causal emergence rely on two prerequisites: (1) a known microscopic dynamics and (2) a known coarse-graining scheme. In practice, however, both are rarely available at the same time; in observational studies in particular, both may be unknown. This limitation therefore hinders the practical applicability of Hoel's theory.
 
Machine learning offers a possible remedy, since both the dynamics and the coarse-graining scheme can, in principle, be learned from data. However, although machine learning techniques facilitate the learning of causal relationships and causal mechanisms and the identification of emergent properties, an important question remains: do the results obtained through machine learning reflect ontological causality and emergence, or are they merely epistemological phenomena? This is still undecided. Although the introduction of machine learning does not settle the debate between ontological and epistemological views of causality and emergence, it can help reduce subjectivity, because the machine learning agent can be regarded as an "objective" observer that makes judgments about causality and emergence independently of human observers. However, the problem of non-unique solutions persists in this approach. Is the result of machine learning ontological or epistemological? The answer is that it is epistemological, with the machine learning algorithm as the epistemic subject. This does not mean that the results of machine learning are meaningless: if the learning agent is well trained and the defined mathematical objective is effectively optimized, the result can also be regarded as objective, because the algorithm itself is objective and transparent. Combining machine learning methods can thus help establish a theoretical framework for observers and study the interaction between observers and the complex systems they observe.
     