Here [math]\mathcal{U}([-\frac{L}{2},\frac{L}{2}]^n)[/math] denotes the uniform distribution on the cube [math][-\frac{L}{2},\frac{L}{2}]^n[/math]. Although L still enters the expectation implicitly, every term that explicitly contains L has been eliminated. In practical numerical calculations, the expectation can be computed by averaging over multiple samples drawn from [math][-\frac{L}{2},\frac{L}{2}]^n[/math], and the result is therefore also insensitive to the size of L. This demonstrates the rationality of introducing the dimension-averaged EI.
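As an illustration of this sampling procedure, the following minimal sketch (assuming numpy; the map, its Jacobian, and the box sizes are illustrative and not taken from this article) estimates [math]\mathbb{E}_{x\sim\mathcal{U}([-\frac{L}{2},\frac{L}{2}]^n)}[\ln|\det \partial_x f(x)|][/math] by Monte Carlo sampling. For a map with a constant Jacobian the estimate is the same for every L.

<syntaxhighlight lang="python">
import numpy as np

def mc_log_det_jacobian(jacobian_fn, n, L, num_samples=10000, seed=0):
    """Monte Carlo estimate of E_{x ~ U([-L/2, L/2]^n)}[ ln|det J_f(x)| ],
    where jacobian_fn(x) returns the n x n Jacobian of f at x."""
    rng = np.random.default_rng(seed)
    xs = rng.uniform(-L / 2, L / 2, size=(num_samples, n))
    return np.mean([np.linalg.slogdet(jacobian_fn(x))[1] for x in xs])

# Illustrative example: a linear map f(x) = A x has constant Jacobian A,
# so the estimate equals ln|det(A)| regardless of the box size L.
A = np.array([[2.0, 0.1, 0.0],
              [0.0, 1.5, 0.2],
              [0.3, 0.0, 1.0]])
for L in (1.0, 10.0, 100.0):
    print(L, mc_log_det_jacobian(lambda x: A, n=3, L=L))
</syntaxhighlight>

For a nonlinear map the Jacobian varies over the cube, which is exactly the sense in which L still enters the expectation implicitly.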
==Random Iterative Systems==
We can extend the above results to linear iterative dynamical systems, that is, to iterative systems of the form
<math>
x_{t+1} = A x_t + \varepsilon_t
</math>
where [math]A\in\mathcal{R}^{n\times n}[/math] is a full-rank [math]n\times n[/math] matrix representing the dynamical coefficients of the linear iterative system, and [math]\varepsilon_t\sim\mathcal{N}(0,\Sigma)[/math] is n-dimensional Gaussian noise with zero mean and covariance matrix [math]\Sigma[/math], which is also of full rank.
This iterative system can be viewed as a special case of Equation {{EquationNote|5}}, where [math]y[/math] corresponds to [math]x_{t+1}[/math] here and [math]f(x_t)[/math] is [math]A x_t[/math].
To define EI, let the size of the intervention space be <math>L</math>. For the single-step mapping, the dimension-averaged effective information is

<math>
\mathcal{J}=\frac{1}{n}\ln\frac{|\det(A)|\cdot L^n}{(2\pi e)^{\frac{n}{2}}\det(\Sigma)^{\frac{1}{2}}}
</math>
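This quantity can be evaluated directly once [math]A[/math], [math]\Sigma[/math], and [math]L[/math] are given. The following is a minimal numerical sketch (assuming numpy; the matrices and the value of L are illustrative):

<syntaxhighlight lang="python">
import numpy as np

def dim_averaged_ei(A, Sigma, L):
    """Dimension-averaged EI of x_{t+1} = A x_t + eps_t, eps_t ~ N(0, Sigma),
    under the intervention x_t ~ U([-L/2, L/2]^n)."""
    n = A.shape[0]
    log_det_A = np.linalg.slogdet(A)[1]          # ln|det(A)|
    log_det_Sigma = np.linalg.slogdet(Sigma)[1]  # ln det(Sigma)
    return (log_det_A + n * np.log(L)
            - 0.5 * n * np.log(2 * np.pi * np.e)
            - 0.5 * log_det_Sigma) / n

A = np.array([[1.0, 0.5],
              [0.0, 0.8]])
Sigma = 0.01 * np.eye(2)
print(dim_averaged_ei(A, Sigma, L=1.0))
</syntaxhighlight>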
In the corresponding causal emergence formula, [math]W[/math] is the coarse-graining matrix of order n×m, where m is the dimension of the macro state space; its role is to map any micro state [math]x_t[/math] to a macro state [math]y_t[/math], and [math]W^{\dagger}[/math] is the pseudo-inverse of W. The first term of that formula is the emergence induced by determinism, abbreviated as '''determinism emergence''' (Determinism Emergence); the second term is the emergence induced by degeneracy, abbreviated as '''degeneracy emergence'''. For more details, see [[随机迭代系统的因果涌现]].
The effective information of a random iterative system can thus be decomposed into two terms: determinism and degeneracy.

* '''Determinism''' describes how predictably the system's future state follows from its current state.
* '''Degeneracy''' describes the extent to which the previous state cannot be traced back from the current state, i.e., how many different previous states are mapped to the same current state.

The stronger the determinism and the weaker the degeneracy, the greater the effective information, and hence the stronger the causal effect.
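A numerical sketch of this decomposition is given below. It is an illustration, not the exact formula derived in [[随机迭代系统的因果涌现]]: it assumes that the macro dynamics induced by [math]W[/math] is [math]WAW^{\dagger}[/math] and that the macro noise covariance is [math]W\Sigma W^T[/math], and it groups the terms according to the determinism/degeneracy reading above. All matrices and parameters are illustrative.

<syntaxhighlight lang="python">
import numpy as np

def dim_averaged_ei(A, Sigma, L):
    """Dimension-averaged EI of a linear stochastic iteration (see the formula above)."""
    n = A.shape[0]
    return (np.linalg.slogdet(A)[1] + n * np.log(L)
            - 0.5 * n * np.log(2 * np.pi * np.e)
            - 0.5 * np.linalg.slogdet(Sigma)[1]) / n

# Illustrative micro system: two nearly interchangeable dimensions (almost
# degenerate dynamics) with independent noise, coarse-grained by averaging.
A = np.array([[0.5, 0.4],
              [0.4, 0.5]])
Sigma = 0.01 * np.eye(2)
W = np.array([[0.5, 0.5]])          # maps the micro state x_t to the macro state y_t
W_pinv = np.linalg.pinv(W)          # pseudo-inverse W^dagger

A_macro = W @ A @ W_pinv            # macro dynamics (assumption: W A W^dagger)
Sigma_macro = W @ Sigma @ W.T       # macro noise covariance (assumption: W Sigma W^T)

n, m, L = A.shape[0], A_macro.shape[0], 1.0
delta_J = dim_averaged_ei(A_macro, Sigma_macro, L) - dim_averaged_ei(A, Sigma, L)

# The ln(L) terms cancel in delta_J; the remainder can be grouped into a part
# driven by the dynamics determinants (degeneracy) and a part driven by the
# noise covariances (determinism).
degeneracy_part = np.linalg.slogdet(A_macro)[1] / m - np.linalg.slogdet(A)[1] / n
determinism_part = 0.5 * (np.linalg.slogdet(Sigma)[1] / n
                          - np.linalg.slogdet(Sigma_macro)[1] / m)
print(delta_J, degeneracy_part + determinism_part)   # the two values agree
</syntaxhighlight>

In this illustrative example both parts are positive, so the coarse-grained dynamics has a larger dimension-averaged EI than the micro dynamics.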
==Feedforward Neural Networks==
For the task of automatic modeling of complex systems, neural networks are often used to model the system dynamics. Specifically, for feedforward neural networks, [[张江]] (Jiang Zhang) et al. derived a formula for the effective information of a feedforward neural network<ref name="zhang_nis">{{cite journal|title=Neural Information Squeezer for Causal Emergence|first1=Jiang|last1=Zhang|first2=Kaiwei|last2=Liu|journal=Entropy|year=2022|volume=25|issue=1|page=26|url=https://api.semanticscholar.org/CorpusID:246275672}}</ref>, where the input of the network is <math>x(x_1,...,x_n)</math>, the output is <math>y(y_1,...,y_n)</math>, and they satisfy <math>y=f(x)</math>, with <math>f</math> the deterministic mapping implemented by the neural network. However, according to Equation {{EquationNote|5}}, the mapping must contain noise in order to reflect uncertainty.
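A common practical workaround, sketched below under explicit assumptions (this is an illustration, not the exact procedure of the cited paper), is to model the network's output as <math>y=f(x)+\varepsilon</math> with <math>\varepsilon\sim\mathcal{N}(0,\sigma^2 I)</math>, where <math>\sigma^2</math> is an assumed (for example, MSE-estimated) noise variance, and then to estimate the dimension-averaged EI by Monte Carlo sampling of the network's Jacobian over the intervention cube. The network architecture, <math>\sigma^2</math>, and sample sizes below are all illustrative.

<syntaxhighlight lang="python">
import numpy as np
import torch
from torch.autograd.functional import jacobian

n = 2
torch.manual_seed(0)
# Illustrative feedforward network f: R^n -> R^n.
f = torch.nn.Sequential(torch.nn.Linear(n, 16), torch.nn.Tanh(), torch.nn.Linear(16, n))

def dim_averaged_ei_nn(f, n, L, sigma2, num_samples=200):
    """Estimate the dimension-averaged EI of y = f(x) + eps, eps ~ N(0, sigma2*I),
    with x ~ U([-L/2, L/2]^n), via Monte Carlo over the input cube."""
    log_dets = []
    for _ in range(num_samples):
        x = (torch.rand(n) - 0.5) * L            # a sample from U([-L/2, L/2]^n)
        J = jacobian(f, x)                       # n x n Jacobian of f at x
        log_dets.append(torch.linalg.slogdet(J)[1].item())
    ei = (np.mean(log_dets) + n * np.log(L)
          - 0.5 * n * np.log(2 * np.pi * np.e)
          - 0.5 * n * np.log(sigma2))            # ln det(sigma2*I) = n*ln(sigma2)
    return ei / n

# sigma2 would normally be estimated from the trained network's prediction error;
# here it is simply an assumed value.
print(dim_averaged_ei_nn(f, n, L=1.0, sigma2=0.01))
</syntaxhighlight>

If the noise is not isotropic, the term <math>\sigma^2 I</math> can be replaced by an estimated covariance matrix <math>\Sigma</math>, exactly as in the linear case above.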