</math>

Where x has a dimension of <code>input_size</code>, y has a dimension of <code>output_size</code>, and [math]\xi[/math] is Gaussian noise with the covariance matrix

<math>
\Sigma=\mathrm{diag}(\sigma_1,\sigma_2,\cdots,\sigma_n)
</math>

Here, [math]\sigma_i[/math] represents the mean squared error (MSE) of the i-th output dimension of the neural network. The inverse of this matrix is denoted as <code>sigmas_matrix</code>, and the mapping function f is represented by <code>func</code>. The following code can be used to calculate the EI for this neural network. The basic idea of the algorithm is to use the Monte Carlo method to calculate the integral in Equation {{EquationNote|6}}.
*Input Variables:
**<code>input_size</code>: dimension of the neural network's input
**<code>output_size</code>: dimension of the output
**<code>sigmas_matrix</code>: inverse of the covariance matrix of the output, which is assumed to follow a Gaussian distribution
**<code>func</code>: the mapping function
**<code>L</code>: size of the intervention interval
**<code>num_samples</code>: number of samples for the Monte Carlo integration
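A minimal PyTorch sketch of such an estimator is given below. It is an illustrative approximation rather than the reference implementation: it assumes <code>input_size</code> equals <code>output_size</code>, a locally invertible <code>func</code>, and noise that is small relative to the spread of <code>func(x)</code>, so that the output entropy can be estimated through the Jacobian of <code>func</code>; the function name <code>monte_carlo_ei</code> is hypothetical.

<syntaxhighlight lang="python">
import math
import torch

def monte_carlo_ei(input_size, output_size, sigmas_matrix, func, L, num_samples):
    # H(Y|X): entropy of the Gaussian noise. Since sigmas_matrix is the
    # inverse covariance matrix, ln|Sigma| = -ln det(sigmas_matrix).
    log_det_sigma = -torch.logdet(sigmas_matrix)
    h_noise = 0.5 * (output_size * math.log(2 * math.pi * math.e) + log_det_sigma)

    # Uniform intervention do(x ~ U([-L/2, L/2]^input_size)).
    xs = (torch.rand(num_samples, input_size) - 0.5) * L

    # Monte Carlo estimate of E_x[ ln|det J_f(x)| ].
    log_jacobians = torch.empty(num_samples)
    for i in range(num_samples):
        jac = torch.autograd.functional.jacobian(func, xs[i])
        _, logabsdet = torch.linalg.slogdet(jac)
        log_jacobians[i] = logabsdet

    # Small-noise approximation: H(Y) ~= input_size * ln(L) + E[ln|det J_f|],
    # and EI = H(Y) - H(Y|X).
    h_y = input_size * math.log(L) + log_jacobians.mean()
    return (h_y - h_noise).item()

# Example with a hypothetical 2-to-2 linear network and per-dimension MSEs:
net = torch.nn.Linear(2, 2, bias=False)
mse = torch.tensor([0.01, 0.02])        # sigma_i: per-dimension MSE
sigmas_matrix = torch.diag(1.0 / mse)   # inverse of Sigma = diag(sigma_i)
print(monte_carlo_ei(2, 2, sigmas_matrix, net, L=1.0, num_samples=1000))
</syntaxhighlight>

The estimator computes EI as [math]H(Y)-H(Y|X)[/math]: the uniform intervention together with the Jacobian term supplies [math]H(Y)[/math], and the Gaussian noise term supplies [math]H(Y|X)[/math].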
This defines the relationship between [[Integrated Information Ability]] and EI.
 
===Distinction===
It is important to note that, unlike EI calculations for Markov chains, the EI here measures the causal connections between two parts of the system, rather than the strength of causal connections across two different time points in the same system.
 
==EI and Other Causal Metrics==
 
EI is a metric used to measure the strength of causal connections in a causal mechanism. Before the introduction of EI, several causal metrics had already been proposed. So, what is the relationship between EI and these causal measures? As Comolatti and Hoel pointed out in their 2022 paper, many causal metrics, including EI, can be expressed as combinations of two basic elements <ref name=":0">Comolatti, R., & Hoel, E. (2022). Causal emergence is widespread across measures of causation. ''arXiv preprint arXiv:2202.01854''.</ref>. These two basic elements are called "Causal Primitives", which represent '''Sufficiency''' and '''Necessity''' in causal relationships.
</math>
Among them, [math]M=\frac{P+Q}{2}=\frac{1}{2}\sum_{x\in\mathcal{X}}\left[P(x)+Q(x)\right][/math] is the average distribution of P and Q, and [math]D_{KL}[/math] is the [[KL Divergence]].
    
Compared with [[KL Divergence]], [[JS Divergence]] is a symmetric measure, i.e. [math]JSD(P||Q)=JSD(Q||P)[/math], while KL divergence is asymmetric.
 
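The symmetry is easy to verify numerically. The following is a minimal NumPy sketch; the helper names <code>kl</code> and <code>jsd</code> are illustrative rather than library functions.

<syntaxhighlight lang="python">
import numpy as np

def kl(p, q):
    # D_KL(P || Q) on a discrete support; assumes q > 0 wherever p > 0.
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def jsd(p, q):
    # JSD(P || Q) = (1/2) D_KL(P || M) + (1/2) D_KL(Q || M), M = (P + Q) / 2.
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.1, 0.3, 0.6])

print(abs(jsd(p, q) - jsd(q, p)))  # ~0: JSD is symmetric
print(abs(kl(p, q) - kl(q, p)))    # > 0: KL divergence is not
</syntaxhighlight>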