Kaplanis et al. <ref name=":2" />, building on methods from [[representation learning]], use a machine learning algorithm to learn the macroscopic state variable <math>V</math> directly from data by maximizing <math>\mathrm{\Psi}</math> (i.e., Equation {{EquationNote|1}}). Specifically, the authors use a neural network <math>f_{\theta}</math> to learn the representation function that coarse-grains the microscopic input <math>X_t</math> into the macroscopic output <math>V_t</math>, while two auxiliary neural networks, <math>g_{\phi}</math> and <math>h_{\xi}</math>, estimate the mutual information terms <math>I(V_t;V_{t + 1})</math> and <math>\sum_i I(V_{t + 1};X_{t}^i)</math>, respectively. The method then trains all of these networks jointly by maximizing the difference between the two terms, which is exactly <math>\mathrm{\Psi}</math>. The architecture of this neural network system is shown in Figure a below.
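The following is a minimal PyTorch sketch of this training setup. It assumes MINE-style (Donsker–Varadhan) neural mutual information estimation for <math>g_{\phi}</math> and <math>h_{\xi}</math>; all class and function names here are illustrative, and the actual implementation by Kaplanis et al. may differ in its estimators and network sizes.

<syntaxhighlight lang="python">
import math
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=64):
    # Small fully connected network used for both the encoder and the critics.
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

class PsiModel(nn.Module):
    def __init__(self, micro_dim, macro_dim=1):
        super().__init__()
        self.f = mlp(micro_dim, macro_dim)               # encoder f_theta: X_t -> V_t
        self.g = mlp(2 * macro_dim, 1)                   # critic g_phi for I(V_t; V_{t+1})
        self.h = nn.ModuleList([mlp(1 + macro_dim, 1)    # critics h_xi, one per
                                for _ in range(micro_dim)])  # micro variable X_t^i

def dv_bound(critic, a, b):
    # Donsker-Varadhan lower bound on I(a; b):
    #   I(a; b) >= E_joint[T(a, b)] - log E_marginals[exp(T(a, b'))]
    t_joint = critic(torch.cat([a, b], dim=1)).squeeze(-1)
    b_shuffled = b[torch.randperm(b.size(0))]            # break the pairing of samples
    t_marg = critic(torch.cat([a, b_shuffled], dim=1)).squeeze(-1)
    return t_joint.mean() - (torch.logsumexp(t_marg, dim=0) - math.log(b.size(0)))

def psi_loss(model, x_t, x_t1):
    v_t, v_t1 = model.f(x_t), model.f(x_t1)
    i_macro = dv_bound(model.g, v_t, v_t1)               # estimate of I(V_t; V_{t+1})
    i_micro = sum(dv_bound(model.h[i], x_t[:, i:i + 1], v_t1)
                  for i in range(x_t.size(1)))           # sum_i I(V_{t+1}; X_t^i)
    return -(i_macro - i_micro)                          # maximizing Psi = minimizing -Psi
</syntaxhighlight>

Training would then take gradient steps on this loss over batches of consecutive state pairs <math>(X_t, X_{t+1})</math>, for example <code>loss = psi_loss(model, x_t, x_t1); loss.backward()</code>, so that the encoder and the mutual information critics improve together.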
To identify causal emergence in a system, the authors of <ref name="NIS" /> propose the [[neural information squeezer]] (NIS), a neural network architecture built on an encoder-dynamics learner-decoder framework. The model consists of three parts: an encoder that coarse-grains the raw data into the macroscopic state, a dynamics learner that fits the macroscopic dynamics, and a decoder that performs the inverse coarse-graining operation (combining the macroscopic state with random noise to reconstruct the microscopic state). The encoder and decoder are built from an [[invertible neural network]] (INN) and approximately correspond to the coarse-graining function <math>\phi</math> and the inverse coarse-graining function <math>\phi^{\dagger}</math>, respectively. An invertible network is used because the inverse coarse-graining function can then be obtained simply by running the same network in reverse (i.e., <math>\phi^{\dagger}\approx \phi^{-1}</math>). This framework can be regarded as a neural information compressor: it passes noisy microscopic state data through a narrow information channel, compressing it into a macroscopic state and discarding useless information so that the macroscopic dynamics exhibit stronger causality, and then decodes the result into a prediction of the microscopic state. The model framework of the NIS method is shown in the following figure:
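The sketch below illustrates this encoder-dynamics learner-decoder structure under stated assumptions: a single RealNVP-style affine coupling layer stands in for the invertible network, the first <code>macro_dim</code> coordinates of the transformed state are kept as the macroscopic state, and all names are hypothetical rather than taken from the NIS codebase.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class CouplingLayer(nn.Module):
    # Affine coupling layer: an invertible transform whose inverse is exact
    # and cheap, so the decoder is literally the encoder run backwards.
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(nn.Linear(self.half, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * (dim - self.half)))

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=1)
        return torch.cat([x1, x2 * torch.exp(s) + t], dim=1)

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s, t = self.net(y1).chunk(2, dim=1)
        return torch.cat([y1, (y2 - t) * torch.exp(-s)], dim=1)

class NIS(nn.Module):
    def __init__(self, micro_dim, macro_dim):
        super().__init__()
        self.micro_dim, self.macro_dim = micro_dim, macro_dim
        self.psi = CouplingLayer(micro_dim)              # invertible network (encoder/decoder)
        self.dynamics = nn.Sequential(                   # macro dynamics learner
            nn.Linear(macro_dim, 64), nn.ReLU(), nn.Linear(64, macro_dim))

    def encode(self, x):
        # Coarse-graining phi: invertible transform, then keep the first
        # macro_dim coordinates and discard the rest as noise.
        return self.psi(x)[:, :self.macro_dim]

    def decode(self, v):
        # Inverse coarse-graining phi_dagger: pad the macro state with fresh
        # Gaussian noise and invert the coupling layer.
        noise = torch.randn(v.size(0), self.micro_dim - self.macro_dim)
        return self.psi.inverse(torch.cat([v, noise], dim=1))

    def forward(self, x_t):
        v_t1 = self.dynamics(self.encode(x_t))           # predicted macro V_{t+1}
        return self.decode(v_t1)                         # predicted micro X_{t+1}
</syntaxhighlight>

Training would then minimize the microscopic prediction error over consecutive state pairs, for example <code>torch.nn.functional.mse_loss(model(x_t), x_t1)</code>; in practice several coupling layers would be stacked to make the invertible map more expressive.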