Note that the conclusions above all require that <math>\partial_{x}f(x)</math> is nonzero. When <math>\partial_{x}f(x)</math> vanishes identically for all <math>x</math>, we get <math>EI(f)\approx 0</math>: a constant map conveys no information about the intervention. For the more general case, one must also consider Jacobian matrices that are not of full rank.
== Source Code for EI of Continuous Systems ==
Both the numerical solution for an invertible neural network and the analytical solution for a stochastic iterative system provide a way to compute EI.

For an invertible neural network, the computation mainly treats the network's variables as Gaussian.

The input variables include the input dimension of the neural network, the output dimension, the covariance matrix of the output variable when it is treated as a Gaussian distribution, the mapping function, the size of the intervention interval, and the number of samples for the Monte Carlo integration. The output variables include the dimension-averaged EI, the EI coefficient, determinism, degeneracy, and so on. A usage sketch comparing this numerical estimate with the analytical solution is given after the second code block below.<syntaxhighlight lang="python3">
import numpy as np
import torch
from torch.autograd.functional import jacobian

def approx_ei(input_size, output_size, sigmas_matrix, func, num_samples, L, easy=True, device=None):
    if device is None:
        device = sigmas_matrix.device
    rho = 1 / (2 * L)**input_size  # density of the uniform distribution of X on [-L, L]^n

    # the first term of EI: the negative entropy of a gaussian distribution
    if easy:
        # sigmas_matrix is diagonal, so its log-determinant is the sum of the logs of the diagonal
        dd = torch.diag(sigmas_matrix)
        dett = torch.log(dd).sum()
    else:
        dett = torch.log(torch.linalg.det(sigmas_matrix))
    term1 = -(output_size + output_size * np.log(2 * np.pi) + dett) / 2

    # sample x uniformly on the space [-L, L]^n
    xx = L * 2 * (torch.rand(num_samples, input_size, device=device) - 1 / 2)

    dets = 0
    logdets = 0

    # iterate over all samples of x
    for i in range(xx.size()[0]):
        jac = jacobian(func, xx[i, :])   # use pytorch's jacobian function to obtain the jacobian matrix
        det = torch.abs(torch.det(jac))  # calculate the determinant of the jacobian matrix
        dets += det.item()
        if det != 0:
            logdets += torch.log(det).item()  # accumulate the log jacobian determinant
        else:
            # if det == 0, the contribution degenerates to a gaussian integration
            logdet = -(output_size + output_size * np.log(2 * np.pi) + dett)
            logdets += logdet.item()

    int_jacobian = logdets / xx.size()[0]  # average of the log jacobian determinant

    term2 = -np.log(rho) + int_jacobian  # derive the 2nd term

    if dets == 0:
        # the jacobian determinant vanished at every sample, so EI is 0
        term2 = -term1
    EI = max(term1 + term2, 0)
    if torch.is_tensor(EI):
        EI = EI.item()
    eff = -EI / np.log(rho)  # EI coefficient (effectiveness)
    d_EI = EI / output_size  # dimension-averaged EI

    return d_EI, eff, EI, term1, term2, -np.log(rho)
</syntaxhighlight>For a stochastic iterative system, the EI has an analytical solution and is therefore much simpler to compute: the function below evaluates <math>EI=\ln\frac{|\det(A)|}{\sqrt{\det(\Sigma)}\,(2\pi e)^{n/2}}</math>. It can be used directly, or as a check on the neural network computation above. The input variables are the parameter matrix and the covariance matrix.<syntaxhighlight lang="python3">
import math
import numpy as np
import scipy.linalg as sl

def EI_calculate_SIS(A, Sigma):
    n = A.shape[0]  # dimension of the system
    return math.log(abs(sl.det(A)) / (np.sqrt(sl.det(Sigma)) * math.pow(2 * np.pi * np.e, n / 2)))
</syntaxhighlight>
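The sketch below is a minimal cross-check, not part of the original code: the linear map <math>f(x)=Ax</math> and the matrices <code>A</code> and <code>Sigma</code> are illustrative assumptions. For a linear map, the average log Jacobian determinant equals <math>\ln|\det(A)|</math> exactly, so after removing the intervention-volume term <math>n\ln(2L)</math> (returned by <code>approx_ei</code> as <math>-\ln\rho</math>), the Monte Carlo estimate should agree with the analytical value.<syntaxhighlight lang="python3">
import numpy as np
import torch

# Illustrative parameters (assumptions, not from the original text)
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])    # parameter matrix, det(A) = 4
Sigma = 0.01 * np.eye(2)      # diagonal covariance of the Gaussian noise

A_t = torch.tensor(A, dtype=torch.float32)
sigmas_matrix = torch.tensor(Sigma, dtype=torch.float32)

def f(x):
    # a linear, invertible map: f(x) = A x
    return A_t @ x

d_EI, eff, EI, term1, term2, log_volume = approx_ei(
    input_size=2, output_size=2, sigmas_matrix=sigmas_matrix,
    func=f, num_samples=1000, L=1.0)

# Subtract the intervention-volume term n*ln(2L) before comparing with
# the analytical formula for the stochastic iterative system.
print(EI - log_volume)             # Monte Carlo estimate, approx. 3.15
print(EI_calculate_SIS(A, Sigma))  # analytical value, approx. 3.15
</syntaxhighlight>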
=EI and Other Related Topics=