====Neural Information Compression Method====
 
In recent years, emerging artificial intelligence technologies have achieved breakthroughs on a series of hard problems. Machine learning methods, equipped with carefully designed neural network architectures and automatic differentiation, can approximate any function within an enormous function space. Building on this, Zhang Jiang et al. proposed a data-driven method based on neural networks to identify causal emergence from time series data <ref name="NIS">Zhang J, Liu K. Neural information squeezer for causal emergence[J]. Entropy, 2022, 25(1): 26.</ref><ref name=":6" />. This method automatically extracts effective coarse-graining strategies and macroscopic dynamics, overcoming various deficiencies of the Rosas method <ref name=":5" />.
However, directly optimizing the dimension-averaged effective information is difficult, so the article <ref name="NIS" /> does not optimize Equation {{EquationNote|1}} directly but instead splits the optimization into two stages. The first stage minimizes the microscopic-state prediction error for a given macroscopic scale <math>q</math>, that is, <math>\min _{\phi, f_q, \phi^{\dagger}}\left\|\phi^{\dagger}(Y(t + 1)) - X_{t + 1}\right\|<\epsilon</math>, which yields the optimal macroscopic dynamics <math>f_q^\ast</math>; the second stage searches over the hyperparameter <math>q</math> to maximize the effective information <math>\mathcal{J}</math>, that is, <math>\max_{q}\mathcal{J}(f_{q}^\ast)</math>. Practice shows that this method can effectively find macroscopic dynamics and coarse-graining functions, but it does not truly maximize EI directly.
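The two-stage procedure can be illustrated with a toy linear sketch. Everything here is an assumption for illustration: a PCA-style linear encoder stands in for <math>\phi</math>, least squares stands in for training <math>f_q</math>, and a determinant-based quantity stands in for the effective information <math>\mathcal{J}</math>; the actual NIS uses invertible neural networks and the real EI measure.

```python
import numpy as np

# Toy sketch of the two-stage NIS optimization (assumptions: linear
# PCA-style encoder phi, least-squares macro dynamics f_q, and a
# determinant-based stand-in for EI; the real method uses invertible
# neural networks and the actual effective-information measure).
rng = np.random.default_rng(0)

# Synthetic micro states: 4 dimensions driven by a 2-D latent dynamic,
# so a macro scale around q = 2 should suffice to predict the data.
n, d = 500, 4
M = rng.normal(size=(d, 2))                  # latent -> micro embedding
A_true = np.array([[0.9, -0.2], [0.1, 0.8]])
Z = np.zeros((n, 2))
for t in range(n - 1):
    Z[t + 1] = Z[t] @ A_true.T + 0.1 * rng.normal(size=2)
X = Z @ M.T + 0.01 * rng.normal(size=(n, d))
X_t, X_t1 = X[:-1], X[1:]

def stage1(X_t, X_t1, q):
    """Stage 1: for a fixed macro dimension q, fit encoder, decoder and
    macro dynamics by minimizing the micro-state prediction error."""
    _, _, Vt = np.linalg.svd(X_t, full_matrices=False)
    W = Vt[:q].T                             # d x q linear encoder (phi)
    Y_t, Y_t1 = X_t @ W, X_t1 @ W            # macro states
    A, *_ = np.linalg.lstsq(Y_t, Y_t1, rcond=None)  # macro dynamics f_q
    X_pred = Y_t @ A @ W.T                   # decode back to micro states
    return A, float(np.mean((X_pred - X_t1) ** 2))

def ei_proxy(A, q):
    """Stage 2 objective: a crude, hypothetical stand-in for the
    dimension-averaged EI of a linear macro map (not the paper's EI)."""
    _, logdet = np.linalg.slogdet(A @ A.T)
    return logdet / (2 * q)

# Stage 2: search the hyperparameter q, keeping only fits whose
# prediction error satisfies the epsilon constraint from stage 1.
best_q, best_J = None, -np.inf
for q in range(1, d + 1):
    A, err = stage1(X_t, X_t1, q)
    if err < 0.1:                            # epsilon constraint
        J = ei_proxy(A, q)
        if J > best_J:
            best_q, best_J = q, J
```

Note how the epsilon constraint decouples the two stages: any <math>q</math> whose best fit cannot reproduce the micro states is excluded before the EI comparison, which is exactly why the search can only pick the best among stage-1 solutions rather than maximize EI end to end.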
=====NIS+=====
 
Although NIS was the first scheme to identify causal emergence from data by optimizing EI, it has a shortcoming: the two-stage optimization does not truly maximize the effective information, that is, Equation {{EquationNote|1}}. Yang Mingzhe et al. <ref name=":6" /> therefore improved the method and proposed the NIS+ scheme. By introducing reverse dynamics and a reweighting technique, and applying a variational inequality, the original maximization of effective information is converted into maximizing a variational lower bound, so that the objective function can be optimized directly.
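The reweighting idea can be sketched in isolation. The assumptions here are all for illustration: a one-dimensional macro variable, a histogram density estimate, and a linear macro dynamic fitted by weighted least squares; NIS+ itself uses neural networks and additionally trains the reverse dynamics, which this toy omits.

```python
import numpy as np

# Isolated sketch of the NIS+ reweighting step (assumptions: 1-D macro
# variable, histogram density estimate, linear macro dynamic; NIS+ uses
# neural networks and also trains reverse dynamics, omitted here).
rng = np.random.default_rng(1)

# Macro states sampled from a non-uniform (Gaussian) data distribution.
y_t = rng.normal(size=2000)
y_t1 = 0.7 * y_t + 0.1 * rng.normal(size=2000)   # true macro dynamic

# Inverse-probability weights: EI is defined under an intervention that
# makes the macro state uniform, so frequent states are down-weighted
# and rare states are up-weighted.
hist, edges = np.histogram(y_t, bins=30, density=True)
idx = np.clip(np.digitize(y_t, edges) - 1, 0, len(hist) - 1)
w = 1.0 / np.maximum(hist[idx], 1e-3)            # clip very low densities
w /= w.mean()                                    # normalize to mean 1

# Weighted least squares: fit the macro dynamic under the reweighted
# (approximately uniform) distribution instead of the data distribution.
a = np.sum(w * y_t * y_t1) / np.sum(w * y_t * y_t)
```

The point of the weights is that the expectation in the EI objective is taken under a uniform intervention on the macro state, while training samples follow the data distribution; reweighting bridges the two so the lower bound can be optimized with ordinary stochastic gradients.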
     