Here, <math>\sigma_i</math> represents the mean squared error (MSE) of the i-th output dimension of the neural network; these values form the diagonal of the output covariance matrix, and the inverse of that matrix is denoted <code>sigmas_matrix</code>. The mapping function <math>f</math> is represented by <code>func</code>. The following code can be used to calculate the EI of this neural network. The basic idea of the algorithm is to use the Monte Carlo method to compute the integral in Equation 6.
 
*Input Variables:
**<code>input_size</code>: dimension of the neural network's input
**<code>output_size</code>: dimension of the output
**<code>sigmas_matrix</code>: inverse of the covariance matrix of the output, which is assumed to follow a Gaussian distribution
**<code>func</code>: the mapping function
**<code>L</code>: size of the intervention interval
**<code>num_samples</code>: number of samples for the Monte Carlo integration
*Output Variables:
**<code>d_EI</code>: dimension-averaged EI
**<code>eff</code>: EI coefficient
**<code>EI</code>: effective information
**<code>term1</code>: determinism
**<code>term2</code>: degeneracy
**<math>\ln L</math>: represented in the code as <code>-np.log(rho)</code>
<syntaxhighlight lang="python3">
def approx_ei(input_size, output_size, sigmas_matrix, func, num_samples, L, easy=True, device=None):
    # ...
    return math.log(abs(sl.det(A))/(np.sqrt(sl.det(Sigma))*math.pow(2*np.pi*np.e,n/2)))
 
</syntaxhighlight>
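The following is a minimal, self-contained sketch of the Monte Carlo approximation described above; it is not the original implementation. It assumes the mapping is injective with equal input and output dimensions, the output noise is Gaussian with inverse covariance <code>sigmas_matrix</code>, and the intervention is uniform on <math>[-L, L]^n</math>. The names <code>approx_ei_mc</code> and <code>numerical_jacobian</code> are hypothetical; the latter estimates the Jacobian of <code>func</code> by forward differences.

<syntaxhighlight lang="python3">
import numpy as np

def numerical_jacobian(func, x, eps=1e-6):
    # Hypothetical helper: forward-difference estimate of the Jacobian of func at x.
    y0 = np.asarray(func(x))
    J = np.zeros((y0.size, x.size))
    for j in range(x.size):
        dx = np.zeros(x.size)
        dx[j] = eps
        J[:, j] = (np.asarray(func(x + dx)) - y0) / eps
    return J

def approx_ei_mc(input_size, sigmas_matrix, func, num_samples, L, seed=0):
    # Sketch of dimension-averaged EI under the assumptions stated in the text.
    rng = np.random.default_rng(seed)
    n = input_size
    # -ln(rho), where rho = (2L)^(-n) is the density of the uniform intervention.
    log_intervention = n * np.log(2 * L)
    # Determinism-related term: E_x[ ln |det J_f(x)| ], estimated by Monte Carlo.
    log_det_jac = 0.0
    for _ in range(num_samples):
        x = rng.uniform(-L, L, size=n)
        log_det_jac += np.log(abs(np.linalg.det(numerical_jacobian(func, x))))
    log_det_jac /= num_samples
    # Conditional entropy H(Y|X) of the Gaussian noise: 0.5 * ln((2*pi*e)^n * det(Sigma));
    # sigmas_matrix is the *inverse* covariance, so ln det(Sigma) = -ln det(sigmas_matrix).
    log_det_sigma = -np.log(np.linalg.det(sigmas_matrix))
    cond_entropy = 0.5 * (n * np.log(2 * np.pi * np.e) + log_det_sigma)
    EI = log_intervention + log_det_jac - cond_entropy  # EI = H(Y) - H(Y|X)
    return EI / n  # d_EI: dimension-averaged EI
</syntaxhighlight>

For a linear mapping <math>y = Ax</math>, the Jacobian term reduces to <math>\ln|\det A|</math>, which recovers the closed-form expression in the <code>return</code> statement above, up to the uniform-intervention term <math>n\ln(2L)</math>.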
=EI and Other Related Topics=
==EI and Integrated Information Theory==
The measure of effective information first appeared in a paper by Tononi and Sporns (2003)<ref name="tononi_2003">{{cite journal |last1=Tononi|first1=G.|last2=Sporns|first2=O.|title=Measuring information integration|journal=BMC Neuroscience|volume=4 |issue=31 |year=2003|url=https://doi.org/10.1186/1471-2202-4-31}}</ref>. In that article, the authors defined the measure of [[Integrated Information Capacity]] and established [[Integrated Information Theory (IIT)]], which later evolved into an important branch of the theory of consciousness. The definition of integrated information capacity is itself based on effective information.
===EI and Φ===
Integrated information (or the degree of integration), <math>\Phi</math>, can be defined as the minimum, over all bipartitions of a system, of the effective information between the two parts. Suppose the system is <math>X</math> and <math>S</math> is a subset of <math>X</math> that is partitioned into two parts, <math>A</math> and <math>B</math>. Then there are interactions and causal relationships between <math>A</math> and <math>B</math>, as well as between them and the rest of <math>X</math>. [[文件:OriginalEI.png|350x350px|Partition in Integrated Information Theory|替代=|缩略图|链接=https://wiki.swarma.org/index.php/%E6%96%87%E4%BB%B6:OriginalEI.png]] In this scenario, we can measure the strength of these causal relationships. First, we calculate the EI from <math>A</math> to <math>B</math>: we intervene on <math>A</math> so that it follows the maximum entropy distribution, and then measure the mutual information between <math>A</math> and <math>B</math>:
 
   
<math>
EI(A\rightarrow B) = I(A^{H^{max}}: B)
</math>
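To make this definition concrete, the following is a minimal sketch (not taken from the cited references) of how <math>EI(A\rightarrow B)</math> and <math>\Phi</math> could be computed for a small discrete system. It assumes the causal influence of <math>A</math> on <math>B</math> is summarized by a conditional probability matrix <code>p_b_given_a</code>; <code>ei_pair</code> is a hypothetical user-supplied function that returns the directed EI between two groups of elements, and the normalization used in Tononi's full definition is omitted.

<syntaxhighlight lang="python3">
import numpy as np
from itertools import combinations

def ei_directed(p_b_given_a):
    # EI(A -> B): mutual information between A and B when A is intervened on
    # so that it follows the maximum entropy (uniform) distribution.
    n_a, n_b = p_b_given_a.shape
    p_a = np.full(n_a, 1.0 / n_a)   # do(A): uniform, i.e., maximum entropy
    p_b = p_a @ p_b_given_a         # marginal of B under the intervention
    ei = 0.0
    for i in range(n_a):
        for j in range(n_b):
            p_joint = p_a[i] * p_b_given_a[i, j]
            if p_joint > 0:
                ei += p_joint * np.log2(p_joint / (p_a[i] * p_b[j]))
    return ei

def phi(elements, ei_pair):
    # Phi: minimum over all bipartitions (A, B) of the elements of the
    # bidirectional effective information EI(A -> B) + EI(B -> A).
    elements = list(elements)
    best = float("inf")
    for r in range(1, len(elements) // 2 + 1):
        for A in combinations(elements, r):
            B = tuple(e for e in elements if e not in A)
            best = min(best, ei_pair(A, B) + ei_pair(B, A))
    return best
</syntaxhighlight>

For example, <code>ei_directed(np.eye(4))</code> returns 2 bits: intervening on four equally likely states of <math>A</math> deterministically selects four distinct states of <math>B</math>.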