[[文件:因果涌现关系图.png|缩略图]]
=====Specific Example=====
[[文件:Examples of causal decoupling and downward causation.png|缩略图]]
=====Machine Learning-based Method=====

Kaplanis et al. <ref name=":2" />, building on the theory of [[representation learning]], use an algorithm that spontaneously learns the macroscopic state variable <math>V</math> by maximizing <math>\mathrm{\Psi}</math> (i.e., Equation {{EquationNote|1}}). Specifically, a neural network <math>f_{\theta}</math> learns the representation function that coarse-grains the microscopic input <math>X_t</math> into the macroscopic output <math>V_t</math>, while two further neural networks, <math>g_{\phi}</math> and <math>h_{\xi}</math>, estimate the mutual-information terms <math>I(V_t;V_{t + 1})</math> and <math>\sum_i I(V_{t + 1};X_{t}^i)</math>, respectively. The method then optimizes all networks by maximizing the difference between the two terms (i.e., <math>\mathrm{\Psi}</math>). The architecture of this neural network system is shown in Figure (a) below.
[[文件:Architectures for learning causal emergent representations1.png|无|缩略图]]
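
The sketch below illustrates this training scheme. It is a minimal illustration rather than the authors' released code: both mutual-information terms are estimated with MINE-style (Donsker-Varadhan) lower bounds, a single statistic network plays the role of <math>h_{\xi}</math> for every microscopic dimension, and the data, dimensions, and hyperparameters are stand-ins.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

def mine_lower_bound(T, a, b):
    # Donsker-Varadhan lower bound on I(A;B):
    # E_joint[T(a,b)] - log E_marginal[exp(T(a,b'))], where b' is a shuffled batch.
    joint = T(torch.cat([a, b], dim=-1)).mean()
    b_shuffled = b[torch.randperm(b.size(0))]
    marginal = T(torch.cat([a, b_shuffled], dim=-1)).exp().mean().log()
    return joint - marginal

def mlp(in_dim, out_dim, hidden=64):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

n, d_micro, d_macro = 1000, 4, 1      # illustrative sizes
f_theta = mlp(d_micro, d_macro)       # coarse-graining: X_t -> V_t
g_phi   = mlp(2 * d_macro, 1)         # statistic network for I(V_t; V_{t+1})
h_xi    = mlp(d_macro + 1, 1)         # statistic network for I(V_{t+1}; X_t^i)

opt = torch.optim.Adam([*f_theta.parameters(), *g_phi.parameters(),
                        *h_xi.parameters()], lr=1e-3)

# Consecutive microscopic states; random stand-in data for the sketch.
x_t, x_next = torch.randn(n, d_micro), torch.randn(n, d_micro)

for step in range(2000):
    v_t, v_next = f_theta(x_t), f_theta(x_next)
    i_macro = mine_lower_bound(g_phi, v_t, v_next)            # I(V_t; V_{t+1})
    i_micro = sum(mine_lower_bound(h_xi, v_next, x_t[:, i:i + 1])
                  for i in range(d_micro))                    # sum_i I(V_{t+1}; X_t^i)
    psi = i_macro - i_micro
    opt.zero_grad()
    (-psi).backward()    # gradient ascent on Psi
    opt.step()
</syntaxhighlight>

Note that the estimated mutual-information terms are variational lower bounds rather than exact values; this is the usual trade-off when mutual information must be made differentiable with respect to the learned representation.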
=====NIS=====

To identify causal emergence in a system, the authors propose the [[neural information squeezer]] (NIS) neural network architecture <ref name="NIS" />. The architecture follows an encoder-dynamics learner-decoder framework: the model consists of three parts, which respectively coarse-grain the original data into the macroscopic state, fit the macroscopic dynamics, and perform the inverse coarse-graining operation (decoding the macroscopic state, combined with random noise, back into the microscopic state). The encoder (Encoder) and decoder (Decoder) are built from an [[invertible neural network]] (INN) and approximately correspond to the coarse-graining function [math]\phi[/math] and the inverse coarse-graining function [math]\phi^{\dagger}[/math], respectively. An invertible network is used because it can simply be inverted to obtain the inverse coarse-graining function (i.e., [math]\phi^{\dagger}\approx \phi^{-1}[/math]). The framework can be regarded as a neural information compressor: it pushes noisy microscopic state data through a narrow information channel, compresses it into a macroscopic state and discards useless information, so that the causality of the macroscopic dynamics becomes stronger, and then decodes the result into a prediction of the microscopic state. The model framework of the NIS method is shown in the following figure:
[[文件:The framework diagram of the NIS model.png|缩略图]]
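
A minimal sketch of this encoder-dynamics learner-decoder pipeline follows, assuming a single RealNVP-style additive coupling layer (with a halves swap and an even microscopic dimension) as the invertible network; all names, dimensions, and the stand-in data are illustrative rather than taken from the paper.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    """RealNVP-style additive coupling plus a halves swap: invertible by construction."""
    def __init__(self, dim):                   # assumes dim is even
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(nn.Linear(self.half, 32), nn.ReLU(),
                                 nn.Linear(32, dim - self.half))

    def forward(self, x):                      # x -> [x2 + net(x1), x1]
        x1, x2 = x[:, :self.half], x[:, self.half:]
        return torch.cat([x2 + self.net(x1), x1], dim=-1)

    def inverse(self, y):                      # exact inverse of forward
        x1 = y[:, self.half:]
        x2 = y[:, :self.half] - self.net(x1)
        return torch.cat([x1, x2], dim=-1)

d_micro, d_macro = 4, 2                        # illustrative dimensions
inn = AdditiveCoupling(d_micro)                # one INN shared by encoder and decoder
dynamics = nn.Sequential(nn.Linear(d_macro, 32), nn.ReLU(),
                         nn.Linear(32, d_macro))   # macroscopic dynamics learner

def encode(x):
    # phi: keep the first d_macro coordinates of the INN output as the macro state,
    # discarding the remaining ("squeezed-out") coordinates.
    return inn(x)[:, :d_macro]

def decode(v):
    # phi^dagger ~ phi^{-1}: pad the macro state with random noise, then invert the INN.
    noise = torch.randn(v.size(0), d_micro - d_macro)
    return inn.inverse(torch.cat([v, noise], dim=-1))

# One prediction step: micro state -> macro state -> macro dynamics -> micro prediction.
x_t = torch.randn(8, d_micro)
x_next_pred = decode(dynamics(encode(x_t)))
</syntaxhighlight>

In the full method the pipeline is trained end-to-end on the microscopic prediction error, so this sketch shows only the forward pass; the payoff of the invertible design is visible in <code>decode</code>, which is obtained by inverting the encoder instead of being trained separately.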
 