Here, σ<sub>i</sub> represents the mean squared error (MSE) for the i-th output dimension of the neural network. The inverse of this covariance matrix is denoted as <code>sigmas_matrix</code>, and the mapping function f is represented by <code>func</code>. The following code can be used to calculate the EI for this neural network. The basic idea of the algorithm is to use the Monte Carlo method to estimate the integral in Equation 6.
*Input Variables:
**input_size: Dimension of the neural network's input
**output_size: Dimension of the output
**sigmas_matrix: Inverse of the covariance matrix of the output variables, which are treated as Gaussian
**func: Mapping function
**L: Size of the intervention interval
**num_samples: Number of samples for the Monte Carlo integration
*Output Variables:
**d_EI: Dimension-averaged EI
**eff: EI coefficient
**EI: Effective Information
**term1: Determinism
**term2: Degeneracy
**-np.log(rho): [math]\ln L[/math]
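Before the full implementation, the Monte Carlo idea can be illustrated with a simplified, self-contained sketch. Everything here is an illustrative assumption rather than the article's actual code: the name <code>approx_ei_sketch</code>, the central-difference Jacobian, and the low-noise approximation EI ≈ n ln L + E<sub>x</sub>[ln|det J<sub>f</sub>(x)|] − (n/2)ln(2πe) + (1/2)ln det(sigmas_matrix), which estimates the integral by averaging the log-Jacobian-determinant over uniform samples from the intervention interval.

```python
import numpy as np

def approx_ei_sketch(n, sigmas_matrix, func, num_samples, L, rng=None):
    """Monte Carlo sketch of the EI integral (low-noise approximation).

    Assumes func: R^n -> R^n, and sigmas_matrix is the inverse covariance
    of the Gaussian output noise. Illustrative only, not the full approx_ei.
    """
    rng = np.random.default_rng(rng)
    # do-intervention: draw x uniformly from [-L/2, L/2]^n
    xs = rng.uniform(-L / 2, L / 2, size=(num_samples, n))
    eps = 1e-5
    log_dets = []
    for x in xs:
        # numerical Jacobian of func at x via central differences
        J = np.empty((n, n))
        for j in range(n):
            dx = np.zeros(n)
            dx[j] = eps
            J[:, j] = (func(x + dx) - func(x - dx)) / (2 * eps)
        log_dets.append(np.log(abs(np.linalg.det(J)) + 1e-300))
    # determinism-like term: average log|det J| minus the noise entropy
    term1 = (np.mean(log_dets)
             - 0.5 * n * np.log(2 * np.pi * np.e)
             + 0.5 * np.log(np.linalg.det(sigmas_matrix)))
    EI = n * np.log(L) + term1  # n*ln L is the entropy of the uniform input
    return EI, EI / n  # EI and dimension-averaged EI
```

For a linear map f(x) = Ax the Jacobian is constant, so the Monte Carlo average reduces to the analytical value, which makes this sketch easy to check against the stochastic-iterative-system formula discussed below.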
<syntaxhighlight lang="python3">
def approx_ei(input_size, output_size, sigmas_matrix, func, num_samples, L, easy=True, device=None):
    # ... (function body omitted) ...
    return d_EI, eff, EI, term1, term2, -np.log(rho)
</syntaxhighlight>
For stochastic iterative systems, an analytical solution exists, so the calculation is much simpler. It can be computed directly, or used as a check on the neural network results above. The input variables are the parameter matrix and the covariance matrix.
<syntaxhighlight lang="python3">
def EI_calculate_SIS(A, Sigma):
    n = A.shape[0]