[[文件:Architectures for learning causal emergent representations1.png|居左|替代=|600x600像素]]
The results show that, in the simple example of Figure (b), maximizing <math>\mathrm{\Psi}</math> with the model constructed in Figure (a) yields a learned <math>\mathrm{\Psi}</math> approximately equal to the ground-truth <math>\mathrm{\Psi}</math>, verifying the effectiveness of the learning model: the system correctly judges whether causal emergence occurs. However, this method has difficulty handling complex multivariate situations, because the number of neural networks on the right side of the figure is proportional to the number of macroscopic-microscopic variable pairs. As the number of microscopic variables (dimensions) grows, the number of networks grows proportionally, which raises the computational cost. In addition, the method has been tested on only a few cases, so it cannot yet be scaled up. Finally, and more importantly, because the network computes the approximate index of causal emergence, which yields a sufficient but not necessary condition for emergence, the method inherits the drawbacks of the approximate algorithm described above.
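As a rough, self-contained sketch (not the authors' implementation), suppose the index being maximized is the approximate measure <math>\mathrm{\Psi} = I(V_t;V_{t+1}) - \sum_j I(X^j_t;V_{t+1})</math> from the earlier sections. For discrete variables it can be estimated directly from empirical counts. The toy system below (our own construction, not from the paper) carries a parity bit forward as a predictable macro variable while re-randomizing the individual micro bits, so the macro variable predicts the future better than any single micro variable and <math>\mathrm{\Psi}</math> comes out positive:

```python
import numpy as np

def mutual_info(x, y):
    """Plug-in estimate of I(X;Y) in bits for two discrete 1-D sequences."""
    _, xi = np.unique(x, return_inverse=True)
    _, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((xi.max() + 1, yi.max() + 1))
    np.add.at(joint, (xi, yi), 1.0)       # joint histogram of (x, y)
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

def psi(micro, macro):
    """Psi = I(V_t; V_{t+1}) - sum_j I(X_t^j; V_{t+1}) for one time series."""
    v_next = macro[1:]
    whole = mutual_info(macro[:-1], v_next)
    parts = sum(mutual_info(micro[:-1, j], v_next) for j in range(micro.shape[1]))
    return whole - parts

# Toy dynamics: the parity of two bits follows a predictable Markov chain
# (it flips with probability 0.1), while each individual bit is
# re-randomized at every step subject to that parity.
rng = np.random.default_rng(0)
T = 20000
bits = np.zeros((T, 2), dtype=int)
bits[0] = rng.integers(0, 2, size=2)
for t in range(1, T):
    p = bits[t - 1, 0] ^ bits[t - 1, 1]   # current parity (macro state)
    p ^= int(rng.random() < 0.1)          # parity flips with prob 0.1
    b0 = int(rng.integers(0, 2))          # bit 0 re-randomized
    bits[t] = (b0, b0 ^ p)                # bit 1 chosen to realize parity p

V = bits[:, 0] ^ bits[:, 1]               # macro mapping: parity of the bits
print(psi(bits, V))                       # theoretical value: 1 - H2(0.1), about 0.53
```

Here the macro mapping (parity) is fixed by hand to illustrate the index itself; in the learning architecture described above, that mapping is what the neural networks are trained to find by maximizing <math>\mathrm{\Psi}</math>.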
====Neural Information Compression Method====