Line 87: | Line 87: |
| --[[用户:ZC|ZC]]([[用户讨论:ZC|讨论]]) 【Review】Change “计算出的变量” to “计算变量” | | --[[用户:ZC|ZC]]([[用户讨论:ZC|讨论]]) 【Review】Change “计算出的变量” to “计算变量” |
| | | |
− | ===Noise models噪音模型=== | + | ===Noise models 噪音模型=== |
| | | |
| | | |
Line 99: | Line 99: |
| Here are some of the noise models for the hypothesis Y → X with the noise E: | | Here are some of the noise models for the hypothesis Y → X with the noise E: |
| | | |
− | 下面是一些支持 Y → X 假设且具有噪声 E 的噪声模型: | + | 下面是一些假设 Y → X 且具有噪声 E 的噪声模型: |
− | * Additive noise:<ref>Hoyer, Patrik O., et al. "[https://papers.nips.cc/paper/3548-nonlinear-causal-discovery-with-additive-noise-models.pdf Nonlinear causal discovery with additive noise models]." NIPS. Vol. 21. 2008.</ref> <math>Y = F(X)+E</math> | |
− | * '''<font color='#ff8000>加法噪音Additive noise</font>''' | |
| | | |
− | * Linear noise:<ref>{{cite journal | last1 = Shimizu | first1 = Shohei | display-authors = etal | year = 2011 | title = DirectLiNGAM: A direct method for learning a linear non-Gaussian structural equation model | url = http://www.jmlr.org/papers/volume12/shimizu11a/shimizu11a.pdf | journal = The Journal of Machine Learning Research | volume = 12 | issue = | pages = 1225–1248 }}</ref> <math>Y = pX + qE</math> | + | * '''<font color='#ff8000'>加性噪声 Additive noise</font>''':<ref>Hoyer, Patrik O., et al. "[https://papers.nips.cc/paper/3548-nonlinear-causal-discovery-with-additive-noise-models.pdf Nonlinear causal discovery with additive noise models]." NIPS. Vol. 21. 2008.</ref> <math>Y = F(X)+E</math> |
− | * '''<font color='#ff8000>线性噪音Linear noise</font>''' | + | * '''<font color='#ff8000'>线性噪声 Linear noise</font>''':<ref>{{cite journal | last1 = Shimizu | first1 = Shohei | display-authors = etal | year = 2011 | title = DirectLiNGAM: A direct method for learning a linear non-Gaussian structural equation model | url = http://www.jmlr.org/papers/volume12/shimizu11a/shimizu11a.pdf | journal = The Journal of Machine Learning Research | volume = 12 | issue = | pages = 1225–1248 }}</ref> <math>Y = pX + qE</math> |
| + | * '''<font color='#ff8000'>非线性后置 Post-non-linear</font>''':<ref>Zhang, Kun, and Aapo Hyvärinen. "[https://arxiv.org/pdf/1205.2599 On the identifiability of the post-nonlinear causal model]." Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence. AUAI Press, 2009.</ref> <math>Y = G(F(X)+E)</math> |
| + | * '''<font color='#ff8000'>异方差噪声 Heteroskedastic noise</font>''':<math>Y = F(X)+E \cdot G(X)</math>
| + | * '''<font color='#ff8000'>功能性噪声 Functional noise</font>''':<ref name="Mooij">Mooij, Joris M., et al. "[http://papers.nips.cc/paper/4173-probabilistic-latent-variable-models-for-distinguishing-between-cause-and-effect.pdf Probabilistic latent variable models for distinguishing between cause and effect]." NIPS. 2010.</ref> <math>Y = F(X,E)</math> |
| | | |
− | * Post-non-linear:<ref>Zhang, Kun, and Aapo Hyvärinen. "[https://arxiv.org/pdf/1205.2599 On the identifiability of the post-nonlinear causal model]." Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence. AUAI Press, 2009.</ref> <math>Y = G(F(X)+E)</math> | + | 上述模型均基于以下假设:
− | * '''<font color='#ff8000>后非线性Post-non-linear(噪音)</font>''' | + | * <font color='#ff8000'>Y 不存在其他影响原因 There are no other causes of Y</font> 。 |
| | | |
− | * Heteroskedastic noise: <math>Y = F(X)+E.G(X)</math> | + | * <font color='#ff8000'>X 和 E 不存在共同的影响原因 X and E have no common causes</font> 。 |
− | * '''<font color='#ff8000>异方差噪音Heteroskedastic noise</font>''' | |
| | | |
− | * Functional noise:<ref name="Mooij">Mooij, Joris M., et al. "[http://papers.nips.cc/paper/4173-probabilistic-latent-variable-models-for-distinguishing-between-cause-and-effect.pdf Probabilistic latent variable models for distinguishing between cause and effect]." NIPS. 2010.</ref> <math>Y = F(X,E)</math> | + | * <font color='#ff8000'>原因的分布独立于因果机制 Distribution of cause is independent from causal mechanisms</font> 。 |
− | * '''<font color='#ff8000>功能性噪音Functional noise</font>''' | |
− | | |
− | | |
− | The common assumption in these models are: | |
− | | |
− | 这些模型的共同假设是: | |
− | * There are no other causes of Y. | |
− | * Y 没有其他原因。 | |
− | | |
− | * X and E have no common causes. | |
− | * X 和 E 没有共同的原因。 | |
− | | |
− | * Distribution of cause is independent from causal mechanisms. | |
− | * 原因的分布独立于因果机制。 | |
| | | |
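The additive-noise model in the list above suggests a simple direction test: regress each variable on the other and check whether the residuals look independent of the presumed cause. The sketch below is not part of the article or of the cited implementations; it assumes NumPy, SciPy and scikit-learn are available, and it replaces the kernel independence (HSIC) tests used in the literature with a crude Spearman-correlation proxy.

<syntaxhighlight lang="python">
# Minimal sketch (assumed setup, not the cited ANM/LiNGAM code): decide between
# X -> Y and Y -> X under the additive-noise model  effect = F(cause) + E.
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 1000)
y = np.tanh(2 * x) + 0.3 * rng.standard_normal(1000)   # ground truth here: X -> Y

def residual_dependence(cause, effect):
    """Fit effect = F(cause) + residual; return a crude cause-residual
    dependence score (|Spearman rho| between cause and squared residuals).
    Smaller means the additive-noise model fits better in this direction."""
    f = GradientBoostingRegressor().fit(cause.reshape(-1, 1), effect)
    residual = effect - f.predict(cause.reshape(-1, 1))
    return abs(spearmanr(cause, residual ** 2)[0])

forward, backward = residual_dependence(x, y), residual_dependence(y, x)
print("X->Y:", forward, " Y->X:", backward)
print("preferred direction:", "X->Y" if forward < backward else "Y->X")
</syntaxhighlight>

Published additive-noise methods use kernel independence tests such as HSIC in place of this proxy, and their validity rests on the three assumptions listed above.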
| On an intuitive level, the idea is that the factorization of the joint distribution P(Cause, Effect) into P(Cause)*P(Effect | Cause) typically yields models of lower total complexity than the factorization into P(Effect)*P(Cause | Effect). Although the notion of “complexity” is intuitively appealing, it is not obvious how it should be precisely defined.<ref name="Mooij"/> A different family of methods attempt to discover causal "footprints" from large amounts of labeled data, and allow the prediction of more flexible causal relations.<ref>Lopez-Paz, David, et al. "[http://www.jmlr.org/proceedings/papers/v37/lopez-paz15.pdf Towards a learning theory of cause-effect inference]" ICML. 2015</ref> | | On an intuitive level, the idea is that the factorization of the joint distribution P(Cause, Effect) into P(Cause)*P(Effect | Cause) typically yields models of lower total complexity than the factorization into P(Effect)*P(Cause | Effect). Although the notion of “complexity” is intuitively appealing, it is not obvious how it should be precisely defined.<ref name="Mooij"/> A different family of methods attempt to discover causal "footprints" from large amounts of labeled data, and allow the prediction of more flexible causal relations.<ref>Lopez-Paz, David, et al. "[http://www.jmlr.org/proceedings/papers/v37/lopez-paz15.pdf Towards a learning theory of cause-effect inference]" ICML. 2015</ref> |
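The complexity comparison can be made concrete with a small numerical experiment. The sketch below is not taken from the cited references, and all model choices in it (a kernel density estimate for the marginal, a boosted-tree regression with Gaussian residuals for the conditional) are assumptions made only for illustration: each factorization is scored by its average negative log-likelihood, and the ordering that the simple model class fits more cheaply is read as the "simpler" one.

<syntaxhighlight lang="python">
# Rough illustration (assumed models, not a validated method) of comparing the
# two factorizations P(X)P(Y|X) and P(Y)P(X|Y) by total model "complexity",
# measured here as average negative log-likelihood in nats.
import numpy as np
from scipy.stats import gaussian_kde, norm
from sklearn.ensemble import GradientBoostingRegressor

def factorization_nll(cause, effect):
    """Average -log[ p(cause) * p(effect | cause) ] under simple models:
    a KDE for the marginal, a regression with Gaussian residuals for the
    conditional."""
    marginal = -np.log(gaussian_kde(cause)(cause)).mean()
    f = GradientBoostingRegressor().fit(cause.reshape(-1, 1), effect)
    residual = effect - f.predict(cause.reshape(-1, 1))
    conditional = -norm.logpdf(residual, scale=residual.std()).mean()
    return marginal + conditional

rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, 1000)
y = np.tanh(2 * x) + 0.3 * rng.standard_normal(1000)   # ground truth here: X -> Y

print("P(X)P(Y|X):", factorization_nll(x, y))
print("P(Y)P(X|Y):", factorization_nll(y, x))
# The ordering with the smaller total is taken as the (heuristic) causal direction.
</syntaxhighlight>

Because both totals estimate the same joint entropy in the limit, any gap between them comes only from how well the restricted model classes fit each ordering, which is exactly the intuition the paragraph above describes.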