第173行: |
第173行: |
| To answer an interventional question, such as "What is the probability that it would rain, given that we wet the grass?" the answer is governed by the post-intervention joint distribution function | | To answer an interventional question, such as "What is the probability that it would rain, given that we wet the grass?" the answer is governed by the post-intervention joint distribution function |
| | | |
− | 回答一个介入性的问题,比如“既然我们把草弄湿了,那么下雨的可能性有多大? ”答案取决于干预后的联合分配函数 | + | 回答一个介入性的问题,比如“既然我们把草弄湿了,那么下雨的可能性有多大? ”答案取决于干预后的'''<font color="#ff8000">联合分布函数 Joint distribution function</font>''' |
− | | |
| | | |
| | | |
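The distinction above, between observing wet grass and *making* the grass wet, can be sketched numerically. The sketch below uses a toy sprinkler network (Rain → Sprinkler, Rain → GrassWet, Sprinkler → GrassWet); all probability values are illustrative assumptions, not taken from the article. The post-intervention distribution is obtained by the truncated factorization: the factor for the intervened variable GrassWet is dropped, so Rain keeps its prior probability.

```python
from itertools import product

# Toy sprinkler network; all numbers are illustrative assumptions.
P_R = {True: 0.2, False: 0.8}            # P(Rain)
P_S = {True: 0.01, False: 0.4}           # P(Sprinkler=True | Rain)
P_G = {(True, True): 0.99, (True, False): 0.90,
       (False, True): 0.80, (False, False): 0.0}  # P(Grass=True | Sprinkler, Rain)

def joint(r, s, g):
    """Pre-intervention joint probability P(Rain=r, Sprinkler=s, Grass=g)."""
    ps = P_S[r] if s else 1 - P_S[r]
    pg = P_G[(s, r)] if g else 1 - P_G[(s, r)]
    return P_R[r] * ps * pg

# Observational query P(Rain=True | Grass=True): condition on seeing wet grass.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
p_observe = num / den

# Interventional query P(Rain=True | do(Grass=True)): the truncated
# factorization drops the factor P(Grass | Sprinkler, Rain), so the
# intervention tells us nothing about Rain.
p_do = P_R[True]

print(round(p_observe, 3), p_do)  # observing wet grass raises P(rain); wetting it does not
```

Observing wet grass raises the probability of rain above its prior, while intervening leaves it at the prior, which is exactly the asymmetry the post-intervention distribution captures.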
第259行: |
第258行: |
| To determine whether a causal relation is identified from an arbitrary Bayesian network with unobserved variables, one can use the three rules of "do-calculus" and test whether all do terms can be removed from the expression of that relation, thus confirming that the desired quantity is estimable from frequency data. | | To determine whether a causal relation is identified from an arbitrary Bayesian network with unobserved variables, one can use the three rules of "do-calculus" and test whether all do terms can be removed from the expression of that relation, thus confirming that the desired quantity is estimable from frequency data. |
| | | |
− | 为了确定一个因果关系是否可以从一个任意的含有未观测变量的贝氏网路中识别出来,我们可以使用“ do-calculus”的三个规则来检验是否所有的 do 项都可以从这个关系的表达式中去掉,从而确认所需的量是可以从频率数据中估计出来的。 | + | 为了确定一个因果关系是否可以从一个任意的含有未观测变量的'''<font color="#ff8000"> 贝氏网络Bayesian network</font>'''中识别出来,我们可以使用“ do-calculus”的三个规则来检验是否所有的 do 项都可以从这个关系的表达式中去掉,从而确认所需的量是可以从频率数据中估计出来的。 |
| | | |
| | | |
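As a worked illustration of the identifiability test described above (a standard do-calculus result, not part of the diffed text): when a set of observed covariates <math>Z</math> satisfies the back-door criterion relative to <math>(X, Y)</math>, the rules of do-calculus reduce the causal query to a do-free expression:

```latex
P(y \mid \operatorname{do}(x)) = \sum_{z} P(y \mid x, z)\, P(z)
```

Since every term on the right-hand side is an ordinary conditional or marginal probability, the causal effect is estimable from frequency data, confirming identifiability.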
第267行: |
第266行: |
| Using a Bayesian network can save considerable amounts of memory over exhaustive probability tables, if the dependencies in the joint distribution are sparse. For example, a naive way of storing the conditional probabilities of 10 two-valued variables as a table requires storage space for <math>2^{10} = 1024</math> values. If no variable's local distribution depends on more than three parent variables, the Bayesian network representation stores at most <math>10\cdot2^3 = 80</math> values. | | Using a Bayesian network can save considerable amounts of memory over exhaustive probability tables, if the dependencies in the joint distribution are sparse. For example, a naive way of storing the conditional probabilities of 10 two-valued variables as a table requires storage space for <math>2^{10} = 1024</math> values. If no variable's local distribution depends on more than three parent variables, the Bayesian network representation stores at most <math>10\cdot2^3 = 80</math> values. |
| | | |
− | 如果联合分布中的依赖关系是稀疏的,那么使用贝氏网路比详尽的概率表可以节省相当多的内存。例如,将10个二值变量的条件概率存储为一个表的简单方法需要存储 <math>2^{10} = 1024</math> 个值。如果没有变量的局部分布依赖于3个以上的父变量,那么贝氏网路表示最多只存储 <math>10\cdot2^3 = 80</math> 个值。 | + | 如果联合分布中的依赖关系是稀疏的,那么使用贝氏网路比详尽的概率表可以节省相当多的内存。例如,将10个二值变量的条件概率存储为一个表的简单方法需要存储 <math>2^{10} = 1024</math> 个值。如果没有变量的局部分布依赖于3个以上的父变量,那么'''<font color="#ff8000"> 贝氏网络Bayesian network</font>'''表示最多只存储 <math>10\cdot2^3 = 80</math> 个值。 |
| | | |
| | | |
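The storage comparison in the passage above is straightforward to check. A minimal sketch of the arithmetic, assuming 10 binary variables and at most 3 parents per variable as in the example:

```python
# Exhaustive joint table over n binary variables needs 2**n entries.
n_vars = 10
full_joint_entries = 2 ** n_vars        # 2^10 = 1024 values

# A Bayesian network stores one conditional table per variable; with at
# most k binary parents each table has at most 2**k rows.
max_parents = 3
bn_entries = n_vars * 2 ** max_parents  # 10 * 2^3 = 80 values

print(full_joint_entries, bn_entries)
```

The saving grows exponentially in the number of variables whenever the maximum parent count stays bounded.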
第275行: |
第274行: |
| One advantage of Bayesian networks is that it is intuitively easier for a human to understand (a sparse set of) direct dependencies and local distributions than complete joint distributions. | | One advantage of Bayesian networks is that it is intuitively easier for a human to understand (a sparse set of) direct dependencies and local distributions than complete joint distributions. |
| | | |
− | 贝叶斯网络的一个优点是它比完全联合分布更易于人类直观地理解(一组稀疏的)直接依赖关系和局部分布。 | + | '''<font color="#ff8000"> 贝氏网络Bayesian networks</font>'''的一个优点是它比'''<font color="#ff8000"> 完全联合分布Complete joint distributions</font>'''更易于人类直观地理解(一组稀疏的)直接依赖关系和局部分布。 |
− | | |
− | | |
| | | |
| ==Inference and learning推论与学习== | | ==Inference and learning推论与学习== |