Generally, probabilistic graphical models use a graph-based representation as the foundation for encoding a distribution over a multi-dimensional space and a graph that is a compact or factorized representation of a set of independences that hold in the specific distribution. Two branches of graphical representations of distributions are commonly used, namely, Bayesian networks and Markov random fields. Both families encompass the properties of factorization and independences, but they differ in the set of independences they can encode and the factorization of the distribution that they induce.<ref name=koller09>{{cite book
 
  |author=Koller, D.
 
===Bayesian network===
 
{{main|Bayesian network}}
 
If the network structure of the model is a directed acyclic graph, the model represents a factorization of the joint probability of all random variables.  More precisely, if the events are <math>X_1,\ldots,X_n</math> then the joint probability satisfies
 
<math>P[X_1,\ldots,X_n]=\prod_{i=1}^nP[X_i|\text{pa}(X_i)]</math>
 
where <math>\text{pa}(X_i)</math> is the set of parents of node <math>X_i</math> (nodes with edges directed towards <math>X_i</math>).  In other words, the joint distribution factors into a product of conditional distributions. For example, the graphical model in the Figure shown above (which is actually not a directed acyclic graph, but an ancestral graph) consists of the random variables <math>A, B, C, D</math>
 
with a joint probability density that factors as
 
<math>P[A,B,C,D] = P[A]\cdot P[B]\cdot P[C,D|A,B]</math>
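
This factorization can be evaluated directly. Below is a minimal Python sketch, not part of the original article, that encodes the example with made-up probability tables for binary variables; the variable encoding and all table values are assumptions chosen purely for illustration.

<syntaxhighlight lang="python">
# Sketch of the factorization P[A,B,C,D] = P[A] * P[B] * P[C,D | A,B]
# for binary variables, using arbitrary (assumed) probability tables.

p_a = {0: 0.6, 1: 0.4}   # marginal table for root node A (values assumed)
p_b = {0: 0.7, 1: 0.3}   # marginal table for root node B (values assumed)

# Joint conditional table P[C, D | A, B], indexed as (a, b) -> {(c, d): prob}.
# Each inner table sums to 1; the uniform entries here are placeholders.
p_cd_given_ab = {
    (a, b): {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
    for a in (0, 1) for b in (0, 1)
}

def joint(a, b, c, d):
    """Evaluate P[A=a, B=b, C=c, D=d] using the factored form."""
    return p_a[a] * p_b[b] * p_cd_given_ab[(a, b)][(c, d)]

# Sanity check: the factored joint sums to 1 over all assignments.
total = sum(joint(a, b, c, d)
            for a in (0, 1) for b in (0, 1)
            for c in (0, 1) for d in (0, 1))
print(joint(0, 1, 1, 0))   # 0.6 * 0.3 * 0.25 = 0.045
print(round(total, 10))    # 1.0
</syntaxhighlight>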
 
Any two nodes are conditionally independent given the values of their parents.  In general, any two sets of nodes are conditionally independent given a third set if a criterion called d-separation holds in the graph.  Local independences and global independences are equivalent in Bayesian networks.
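
To make the criterion concrete, here is a minimal Python sketch, not taken from the article. It checks d-separation on an assumed toy DAG that merely reuses the variable names <math>A, B, C, D</math> (with <math>C</math> and <math>D</math> each given parents <math>A</math> and <math>B</math>); this graph is an assumption for illustration, not the ancestral graph of the figure. The check uses the standard moralization reduction: restrict the graph to the ancestors of the three sets, "marry" parents that share a child, drop edge directions, delete the conditioning set, and test whether the two sets are still connected.

<syntaxhighlight lang="python">
from collections import deque
from itertools import combinations

def ancestors(parents, nodes):
    """Return `nodes` together with all of their ancestors."""
    seen, stack = set(nodes), list(nodes)
    while stack:
        for p in parents.get(stack.pop(), ()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def d_separated(parents, xs, ys, zs):
    """True if xs and ys are d-separated given zs in the DAG `parents`
    (a mapping child -> tuple of parents), via the moralization criterion."""
    keep = ancestors(parents, set(xs) | set(ys) | set(zs))
    undirected = {v: set() for v in keep}
    for child in keep:
        ps = list(parents.get(child, ()))
        for p in ps:                       # keep original edges, undirected
            undirected[p].add(child)
            undirected[child].add(p)
        for p, q in combinations(ps, 2):   # "marry" parents sharing a child
            undirected[p].add(q)
            undirected[q].add(p)
    blocked = set(zs)
    frontier = deque(x for x in xs if x not in blocked)
    reached = set(frontier)
    while frontier:                        # BFS that never enters the conditioning set
        for nbr in undirected[frontier.popleft()]:
            if nbr not in reached and nbr not in blocked:
                reached.add(nbr)
                frontier.append(nbr)
    return reached.isdisjoint(ys)

# Assumed toy DAG (child -> parents): A and B are roots, C and D depend on both.
parents = {"A": (), "B": (), "C": ("A", "B"), "D": ("A", "B")}
print(d_separated(parents, {"A"}, {"B"}, set()))   # True: A and B are marginally independent
print(d_separated(parents, {"A"}, {"B"}, {"C"}))   # False: conditioning on the common child C couples them
</syntaxhighlight>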
 
This type of graphical model is known as a directed graphical model, Bayesian network, or belief network. Classic machine learning models such as hidden Markov models and neural networks, as well as newer models such as variable-order Markov models, can be considered special cases of Bayesian networks.
 
===Other types===
 
* [[Naive Bayes classifier]] where we use a tree with a single root
* [[Dependency network (graphical model)|Dependency network]] where cycles are allowed
* Tree-augmented classifier or '''TAN model'''
* A [[factor graph]] is an undirected [[bipartite graph]] connecting variables and factors. Each factor represents a function over the variables it is connected to. This is a helpful representation for understanding and implementing [[belief propagation]]; an illustrative sketch follows after this list.
* A [[clique tree]] or junction tree is a [[tree (graph theory)|tree]] of [[clique (graph theory)|cliques]], used in the [[junction tree algorithm]].
* A [[chain graph]] is a graph which may have both directed and undirected edges, but without any directed cycles (i.e. if we start at any vertex and move along the graph respecting the directions of any arrows, we cannot return to the vertex we started from if we have passed an arrow). Both directed acyclic graphs and undirected graphs are special cases of chain graphs, which can therefore provide a way of unifying and generalizing Bayesian and Markov networks.<ref>{{cite journal|last=Frydenberg|first=Morten|year=1990|title=The Chain Graph Markov Property|journal=[[Scandinavian Journal of Statistics]]|volume=17|issue=4|pages=333–353|mr=1096723|jstor=4616181}}
</ref>
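
As a concrete companion to the factor-graph entry above, here is a small Python sketch that is my own illustration, not from the article: the variables, factor functions, and all numeric values are assumed. It stores a factor graph as a list of (scope, function) pairs — the edges of the bipartite graph connect each factor to the variables in its scope — and computes a marginal by brute-force enumeration, which is the same quantity belief propagation would obtain efficiently by message passing.

<syntaxhighlight lang="python">
from itertools import product
import math

# A toy factor graph over three binary variables. Each factor is a pair
# (scope, function); an edge of the bipartite graph connects a factor to
# every variable in its scope. All numbers below are made up for illustration.
variables = {"X1": (0, 1), "X2": (0, 1), "X3": (0, 1)}
factors = [
    (("X1",), lambda x1: 0.6 if x1 == 0 else 0.4),            # unary factor on X1
    (("X1", "X2"), lambda x1, x2: 0.9 if x1 == x2 else 0.1),  # pairwise factor
    (("X2", "X3"), lambda x2, x3: 0.8 if x2 == x3 else 0.2),  # pairwise factor
]

def score(assignment):
    """Unnormalized probability of a full assignment: product of all factors."""
    return math.prod(f(*(assignment[v] for v in scope)) for scope, f in factors)

def marginal(var):
    """Brute-force marginal of one variable. Belief propagation computes the
    same marginals by passing messages along the variable-factor edges."""
    names = list(variables)
    totals = {value: 0.0 for value in variables[var]}
    for combo in product(*(variables[n] for n in names)):
        assignment = dict(zip(names, combo))
        totals[assignment[var]] += score(assignment)
    z = sum(totals.values())
    return {value: t / z for value, t in totals.items()}

print(marginal("X1"))  # biased towards X1 = 0 by the unary factor
</syntaxhighlight>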
 
* [[Belief propagation]]
* [[Structural equation model]]
     