An alternative method of structural learning uses optimization-based search. It requires a scoring function and a search strategy. A common scoring function is the posterior probability of the structure given the training data, such as the BIC or the BDeu. An exhaustive search returning a structure that maximizes the score requires time superexponential in the number of variables. A local search strategy makes incremental changes aimed at improving the score of the structure. A global search algorithm like Markov chain Monte Carlo can avoid getting trapped in local minima. Friedman et al. discuss using mutual information between variables to find a structure that maximizes it; they do this by restricting the candidate parent set of each node to k nodes and searching exhaustively within it.
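A minimal sketch of such a score-based local search, assuming a greedy hill climber that only adds edges and a BIC score for fully observed discrete data (all function names here are illustrative, not a particular library's API; real implementations also consider edge deletions and reversals, and use restarts or MCMC to escape local optima):

```python
# Hedged sketch: greedy hill climbing over edge additions with a BIC score.
import math
from collections import Counter

def bic_family(data, child, parents, card):
    """BIC contribution of one node given its parent set.
    data: list of dicts mapping variable -> state; card: variable -> #states."""
    n = len(data)
    counts = Counter((tuple(r[p] for p in parents), r[child]) for r in data)
    pa_counts = Counter(tuple(r[p] for p in parents) for r in data)
    loglik = sum(c * math.log(c / pa_counts[pa]) for (pa, _), c in counts.items())
    n_params = (card[child] - 1) * math.prod(card[p] for p in parents)
    return loglik - 0.5 * math.log(n) * n_params  # BIC penalty

def would_cycle(parents, u, v):
    """Would adding edge u -> v create a directed cycle?
    True iff v is already an ancestor of u."""
    stack, seen = list(parents[u]), set()
    while stack:
        x = stack.pop()
        if x == v:
            return True
        if x not in seen:
            seen.add(x)
            stack.extend(parents[x])
    return False

def hill_climb(data, nodes, card):
    """Greedily add the single best-scoring edge until no addition helps."""
    parents = {v: set() for v in nodes}
    score = {v: bic_family(data, v, sorted(parents[v]), card) for v in nodes}
    while True:
        best = None
        for u in nodes:
            for v in nodes:
                if u == v or u in parents[v] or would_cycle(parents, u, v):
                    continue
                s = bic_family(data, v, sorted(parents[v] | {u}), card)
                if s > score[v] and (best is None or s - score[v] > best[0]):
                    best = (s - score[v], u, v, s)
        if best is None:
            return parents
        _, u, v, s = best
        parents[v].add(u)
        score[v] = s
```

Because the score decomposes over families, each candidate move only requires rescoring the one node whose parent set changed.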
 
A particularly fast method for exact BN learning is to cast the problem as an optimization problem and solve it using integer programming. Acyclicity constraints are added to the integer program (IP) during solving in the form of cutting planes. Such a method can handle problems with up to 100 variables.
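One common way to set this up, sketched here under the usual "family variable" encoding (the exact constraint-generation scheme varies by implementation), introduces a binary variable $x_{v,W}$ for each node $v$ and candidate parent set $W$, with precomputed local score $s(v, W)$:

```latex
\max_{x} \;\; \sum_{v \in V} \; \sum_{W \subseteq V \setminus \{v\}} s(v, W)\, x_{v,W}
\quad \text{subject to} \quad
\sum_{W} x_{v,W} = 1 \;\; \forall v \in V,
\qquad
\sum_{v \in C} \; \sum_{W : W \cap C = \emptyset} x_{v,W} \;\ge\; 1
\;\; \forall\, \emptyset \ne C \subseteq V,
\qquad x_{v,W} \in \{0, 1\}.
```

The first constraints pick exactly one parent set per node; the exponentially many "cluster" constraints enforce acyclicity (every nonempty node set must contain at least one node whose parents all lie outside it), and these are the constraints added lazily as cutting planes when the LP relaxation violates them.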
 
In order to deal with problems with thousands of variables, a different approach is necessary. One approach is to first sample an ordering of the variables, and then find the optimal BN structure with respect to that ordering. This implies working in the search space of possible orderings, which is convenient because it is smaller than the space of network structures. Multiple orderings are then sampled and evaluated. This method has been shown to be the best available in the literature when the number of variables is huge.
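A hedged sketch of this ordering-based search (illustrative names, not a particular library's API): for a fixed ordering, acyclicity is automatic, so the optimal parent set of each node can be chosen independently from among its predecessors in the ordering.

```python
# Ordering-based search: sample orderings, solve each exactly, keep the best.
import random
from itertools import combinations

def best_net_for_ordering(order, score, max_parents=2):
    """score(v, parents_frozenset) -> local score; higher is better.
    Exact for this ordering, since any choice of predecessors is acyclic."""
    parents = {}
    for i, v in enumerate(order):
        preds = order[:i]
        candidates = [frozenset(c) for k in range(max_parents + 1)
                      for c in combinations(preds, k)]
        parents[v] = max(candidates, key=lambda w: score(v, w))
    return parents

def ordering_search(nodes, score, n_samples=100, seed=0):
    """Sample random orderings and return the best network found."""
    rng = random.Random(seed)
    best_total, best_net = None, None
    for _ in range(n_samples):
        order = nodes[:]
        rng.shuffle(order)
        net = best_net_for_ordering(order, score)
        total = sum(score(v, w) for v, w in net.items())
        if best_total is None or total > best_total:
            best_total, best_net = total, net
    return best_net
```

Bounding the parent-set size (here `max_parents`) keeps the per-ordering optimization polynomial, which is what makes this practical at scale.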
 
Another method consists of focusing on the sub-class of decomposable models, for which the MLE has a closed form. It is then possible to discover a consistent structure for hundreds of variables.
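As a toy illustration of that closed form (not a general implementation), consider the decomposable chain A - B - C with cliques {A,B}, {B,C} and separator {B}: the MLE of the joint is the product of the empirical clique marginals divided by the empirical separator marginal, so no iterative fitting is needed.

```python
# Closed-form MLE for the decomposable chain A - B - C.
from collections import Counter

def mle_chain(data):
    """data: list of dicts with keys 'A', 'B', 'C'. Returns the MLE joint."""
    n = len(data)
    ab = Counter((r['A'], r['B']) for r in data)   # clique {A,B} counts
    bc = Counter((r['B'], r['C']) for r in data)   # clique {B,C} counts
    b = Counter(r['B'] for r in data)              # separator {B} counts
    def p(a_, b_, c_):
        # closed-form MLE: P(a,b,c) = Phat(a,b) * Phat(b,c) / Phat(b)
        if b[b_] == 0:
            return 0.0
        return (ab[(a_, b_)] / n) * (bc[(b_, c_)] / n) / (b[b_] / n)
    return p
```

Because the estimate is just ratios of counts, scoring many candidate decomposable structures stays cheap even with hundreds of variables.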
 
Learning Bayesian networks with bounded treewidth is necessary to allow exact, tractable inference, since the worst-case inference complexity is exponential in the treewidth k (under the exponential time hypothesis). Yet, because treewidth is a global property of the graph, bounding it considerably increases the difficulty of the learning process. In this context it is possible to use K-trees for effective learning.
 
    
==Statistical introduction==
 