Inference and learning in Bayesian networks comprise three main tasks:
===Inferring unobserved variables===
Because a Bayesian network is a complete model for its variables and their relationships, it can be used to answer probabilistic queries about them. For example, the network can be used to update knowledge of the state of a subset of variables when other variables (the ''evidence'' variables) are observed. This process of computing the ''posterior'' distribution of variables given evidence is called probabilistic inference. The posterior gives a universal [[sufficient statistic]] for detection applications, when choosing values for the variable subset that minimize some expected loss function, for instance the probability of decision error. A Bayesian network can thus be considered a mechanism for automatically applying [[Bayes' theorem]] to complex problems.
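As a minimal sketch of posterior inference, consider the classic three-variable sprinkler network (a standard textbook example, not taken from this article): Rain (R) influences both Sprinkler (S) and GrassWet (W), and S also influences W. The posterior P(R | W=true) can be computed by brute-force enumeration over the joint distribution; the probability values below are illustrative.

```python
# Hypothetical sprinkler network: R -> S, R -> W, S -> W.
# The joint factorises as P(R) * P(S|R) * P(W|S,R).
P_R = {True: 0.2, False: 0.8}                    # P(R)
P_S = {True: {True: 0.01, False: 0.99},          # P(S | R): outer key is R
       False: {True: 0.4, False: 0.6}}
P_W = {(True, True): 0.99, (True, False): 0.9,   # P(W=True | S, R): key is (S, R)
       (False, True): 0.8, (False, False): 0.0}

def joint(r, s, w):
    """Joint probability P(R=r, S=s, W=w) via the chain rule of the DAG."""
    pw = P_W[(s, r)] if w else 1.0 - P_W[(s, r)]
    return P_R[r] * P_S[r][s] * pw

# Posterior P(R=True | W=True): marginalise out the unobserved S,
# then normalise by the probability of the evidence W=True.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
posterior = num / den
print(round(posterior, 4))  # → 0.3577
```

Enumeration is exponential in the number of unobserved variables; practical systems use exact algorithms such as variable elimination or approximate sampling, but the normalisation step above is exactly the automatic application of Bayes' theorem described in the text.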
===Parameter learning===
In order to fully specify the Bayesian network and thus fully represent the [[joint probability distribution]], it is necessary to specify for each node ''X'' the probability distribution for ''X'' conditional upon ''X'''s parents. The distribution of ''X'' conditional upon its parents may have any form. It is common to work with discrete or [[normal distribution|Gaussian distributions]] since that simplifies calculations. Sometimes only constraints on a distribution are known; one can then use the [[principle of maximum entropy]] to determine a single distribution, the one with the greatest [[information entropy|entropy]] given the constraints. (Analogously, in the specific context of a [[dynamic Bayesian network]], the conditional distribution for the hidden state's temporal evolution is commonly specified to maximize the [[entropy rate]] of the implied stochastic process.)
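When complete data are available, the discrete conditional distributions above can be estimated by maximum likelihood, which for a conditional probability table reduces to counting. The following sketch estimates P(X | parent) for a binary node with one binary parent; the variable names and data are purely illustrative assumptions.

```python
from collections import Counter

# Hypothetical complete data: each record is (parent value, X value).
data = [(True, True), (True, True), (True, False),
        (False, False), (False, False), (False, True)]

pair_counts = Counter(data)                  # N(parent, X)
parent_counts = Counter(p for p, _ in data)  # N(parent)

# Maximum-likelihood CPT entry: P(X=x | parent=p) = N(p, x) / N(p).
cpt = {(p, x): pair_counts[(p, x)] / parent_counts[p]
       for p in (True, False) for x in (True, False)}

print(cpt[(True, True)])  # → 0.6666...  (2 of the 3 parent=True records have X=True)
```

Each row of the resulting table sums to 1 by construction; with sparse data, a smoothing prior (e.g. Dirichlet pseudo-counts) is typically added to avoid zero probabilities, which connects to the maximum-entropy idea mentioned in the text.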
===Structure learning===