鲁宾因果模型


The Rubin causal model (RCM), also known as the Neyman–Rubin causal model,[1] is an approach to the statistical analysis of cause and effect based on the framework of potential outcomes, named after Donald Rubin. The name "Rubin causal model" was first coined by Paul W. Holland.[2] The potential outcomes framework was first proposed by Jerzy Neyman in his 1923 Master's thesis,[3] though he discussed it only in the context of completely randomized experiments.[4] Rubin extended it into a general framework for thinking about causation in both observational and experimental studies.[1]

鲁宾因果模型 Rubin Causal Model (RCM) ,也称为 Neyman-Rubin 因果模型,[1]是一种基于潜在结果框架的因果统计分析方法,以Donald Rubin的名字命名。“鲁宾因果模型”这个名字最早是由 Paul W. Holland 创造的。 [2] 潜在结果框架 Potential Outcomes Framework最初是由 Jerzy Neyman 在他 1923 年的硕士论文中提出的,[3]尽管他只在完全随机实验的背景下讨论了它。 [4]鲁宾将其扩展为在观察性和实验性研究中思考因果关系的一般框架。[1]

介绍

The Rubin causal model is based on the idea of potential outcomes. For example, a person would have a particular income at age 40 if he had attended college, whereas he would have a different income at age 40 if he had not attended college. To measure the causal effect of going to college for this person, we need to compare the outcome for the same individual in both alternative futures. Since it is impossible to see both potential outcomes at once, one of the potential outcomes is always missing. This dilemma is the "fundamental problem of causal inference".

Because of the fundamental problem of causal inference, unit-level causal effects cannot be directly observed. However, randomized experiments allow for the estimation of population-level causal effects.[5] A randomized experiment assigns people randomly to treatments: college or no college. Because of this random assignment, the groups are (on average) equivalent, and the difference in income at age 40 can be attributed to the college assignment since that was the only difference between the groups. An estimate of the average causal effect (also referred to as the average treatment effect) can then be obtained by computing the difference in means between the treated (college-attending) and control (not-college-attending) samples.

In many circumstances, however, randomized experiments are not possible due to ethical or practical concerns. In such scenarios there is a non-random assignment mechanism. This is the case for the example of college attendance: people are not randomly assigned to attend college. Rather, people may choose to attend college based on their financial situation, parents' education, and so on. Many statistical methods have been developed for causal inference, such as propensity score matching. These methods attempt to correct for the assignment mechanism by finding control units similar to treatment units.

鲁宾因果模型基于潜在结果的想法。例如,如果一个人上过大学,他在 40 岁时会有特定的收入,而如果他没有上过大学,他在 40 岁时会有不同的收入。为了衡量这个人上大学的因果效应,我们需要比较同一个人在两种不同的未来中的结果。由于不可能同时看到两种潜在结果,因此总是缺少其中一种潜在结果。这种困境就是“因果推理的基本问题”。

由于因果推理的基本问题,单元层面的因果效应无法被直接观察到。然而,随机实验允许我们估计总体层面的因果效应。[5]随机实验把人们随机分配到不同处理:上大学或不上大学。由于这种随机分配,各组(平均而言)是等价的,因此 40 岁时的收入差异可以归因于大学这一分配,因为这是各组之间的唯一差异。然后,通过计算处理组(上大学)和对照组(不上大学)样本之间的均值之差,就可以得到平均因果效应(也称为平均处理效应)的估计值。
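To make the difference-in-means idea above concrete, here is a minimal Python sketch (not from the original article) that simulates a randomized assignment on made-up income numbers and recovers an assumed true effect of +10,000; all variable names and figures are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10_000
# Hypothetical potential outcomes: income at age 40 without and with college.
income_no_college = rng.normal(40_000, 8_000, size=n)
income_college = income_no_college + 10_000        # assume a +10,000 true effect

# Random assignment: each person has a 50% chance of "attending college".
attends = rng.random(n) < 0.5

# Only one potential outcome is observed per person (fundamental problem).
observed = np.where(attends, income_college, income_no_college)

# Difference in means between treated and control estimates the average effect.
ate_hat = observed[attends].mean() - observed[~attends].mean()
print(f"Estimated average treatment effect: {ate_hat:,.0f}")   # close to 10,000
```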

然而,在许多情况下,由于伦理或实际问题,随机实验是不可能的。在这种情况下,存在非随机的分配机制。上大学的例子就是这种情况:人们不是被随机分配去上大学的。相反,人们可能会根据自己的经济状况、父母的教育程度等来选择是否上大学。人们已经为因果推断开发了许多统计方法,例如倾向得分匹配。这些方法试图通过寻找与处理单元相似的控制单元来校正分配机制的影响。
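Propensity score matching is only named above; the following sketch shows one simple version of the idea, assuming NumPy and scikit-learn are available and using invented covariates (parents' education, family income). It is not a full implementation (no caliper, no balance diagnostics), just the core steps: estimate treatment probabilities, pair each treated unit with the nearest control, and average the paired differences.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2_000

# Hypothetical covariates that influence both college attendance and later income.
parent_edu = rng.normal(12, 3, size=n)            # years of schooling
family_income = rng.normal(5.0, 1.5, size=n)      # in units of $10,000
X = np.column_stack([parent_edu, family_income])

# Non-random assignment: richer, better-educated families attend college more often.
p_attend = 1 / (1 + np.exp(-(0.4 * (parent_edu - 12) + 0.8 * (family_income - 5.0))))
treated = rng.random(n) < p_attend

# Hypothetical outcome: income at 40 (in $1,000s) with a true effect of +10.
outcome = 30 + 1.5 * parent_edu + 3.0 * family_income + 10 * treated + rng.normal(0, 5, size=n)

# Step 1: estimate each unit's propensity score P(treated | X).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: match each treated unit to the control unit with the closest score.
t_idx, c_idx = np.where(treated)[0], np.where(~treated)[0]
match = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]

# Step 3: average the within-pair differences (effect of treatment on the treated).
att_hat = (outcome[t_idx] - outcome[match]).mean()
naive = outcome[treated].mean() - outcome[~treated].mean()
print(f"matched estimate: {att_hat:.1f}, naive difference in means: {naive:.1f}")
```

The naive difference in means is biased upward here because attendance is correlated with covariates that also raise income; the matched comparison moves the estimate back toward the assumed +10.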

一个扩展案例

Rubin defines a causal effect:

"Intuitively, the causal effect of one treatment, E, over another, C, for a particular unit and an interval of time from [math]\displaystyle{ t_{1} }[/math] to [math]\displaystyle{ t_{2} }[/math] is the difference between what would have happened at time [math]\displaystyle{ t_{2} }[/math] if the unit had been exposed to E initiated at [math]\displaystyle{ t_{1} }[/math] and what would have happened at [math]\displaystyle{ t_{2} }[/math] if the unit had been exposed to C initiated at [math]\displaystyle{ t_{1} }[/math]: 'If an hour ago I had taken two aspirins instead of just a glass of water, my headache would now be gone,' or 'because an hour ago I took two aspirins instead of just a glass of water, my headache is now gone.' Our definition of the causal effect of the E versus C treatment will reflect this intuitive meaning."[5]

According to the RCM, the causal effect of your taking or not taking aspirin one hour ago is the difference between how your head would have felt in case 1 (taking the aspirin) and case 2 (not taking the aspirin). If your headache would remain without aspirin but disappear if you took aspirin, then the causal effect of taking aspirin is headache relief. In most circumstances, we are interested in comparing two futures, one generally termed "treatment" and the other "control". These labels are somewhat arbitrary.

鲁宾定义了一个因果效应:

“直观地说,对于某个特定单元以及从 [math]\displaystyle{ t_{1} }[/math] 到 [math]\displaystyle{ t_{2} }[/math] 的时间区间,一种处理 E 相对于另一种处理 C 的因果效应,是指:如果该单元从 [math]\displaystyle{ t_{1} }[/math] 开始接受处理 E,在 [math]\displaystyle{ t_{2} }[/math] 时会发生的情况,与如果该单元从 [math]\displaystyle{ t_{1} }[/math] 开始接受处理 C,在 [math]\displaystyle{ t_{2} }[/math] 时会发生的情况之间的差异:‘如果一个小时前我服用的是两片阿司匹林而不是只喝了一杯水,我的头痛现在就会消失’,或者‘因为一个小时前我服用了两片阿司匹林而不是只喝了一杯水,我的头痛现在已经消失了。’我们对处理 E 相对于处理 C 的因果效应的定义将反映这种直观的含义。”[5]

根据 RCM,您一小时前服用或不服用阿司匹林的因果效应,是您的头在情况 1(服用阿司匹林)和情况 2(未服用阿司匹林)下感受之间的差异。如果不服用阿司匹林您的头痛会持续,而服用阿司匹林头痛就会消失,那么服用阿司匹林的因果效应就是头痛的缓解。在大多数情况下,我们感兴趣的是比较两种未来,其中一种通常称为“处理”,另一种称为“控制”。这些标签在一定程度上是任意的。

潜在结果

Potential outcomes

Suppose that Joe is participating in an FDA test for a new hypertension drug. If we were omniscient, we would know the outcomes for Joe under both treatment (the new drug) and control (either no treatment or the current standard treatment). The causal effect, or treatment effect, is the difference between these two potential outcomes.

假设 Joe 正在参加 FDA 对一种新的高血压药物的测试。如果我们无所不知,我们就会知道乔在处理t(新药)和控制c(未处理或当前标准处理)下的结果。因果效应或处理效应是这两种潜在结果之间的差异。

主题 [math]\displaystyle{ Y_{t}(u) }[/math] [math]\displaystyle{ Y_{c}(u) }[/math] [math]\displaystyle{ Y_{t}(u)-Y_{c}(u) }[/math]
乔 130 135 −5

[math]\displaystyle{ Y_{t}(u) }[/math] is Joe's blood pressure if he takes the new pill. In general, this notation expresses the potential outcome which results from a treatment, t, on a unit, u. Similarly, [math]\displaystyle{ Y_{c}(u) }[/math] is the effect of a different treatment, c or control, on a unit, u. In this case, [math]\displaystyle{ Y_{c}(u) }[/math] is Joe's blood pressure if he doesn't take the pill. [math]\displaystyle{ Y_{t}(u)-Y_{c}(u) }[/math] is the causal effect of taking the new drug.

From this table we only know the causal effect on Joe. Everyone else in the study might have an increase in blood pressure if they take the pill. However, regardless of what the causal effect is for the other subjects, the causal effect for Joe is lower blood pressure, relative to what his blood pressure would have been if he had not taken the pill.

Consider a larger sample of patients:

如果乔服用新药丸,[math]\displaystyle{ Y_{t}(u) }[/math] 就是他的血压。一般地,该符号表示对单元 u 施加处理 t 所产生的潜在结果。类似地,[math]\displaystyle{ Y_{c}(u) }[/math] 是另一种处理(即控制 c)作用于单元 u 的潜在结果。在这种情况下,如果乔不服药,[math]\displaystyle{ Y_{c}(u) }[/math] 就是他的血压。[math]\displaystyle{ Y_{t}(u)-Y_{c}(u) }[/math] 是服用新药的因果效应。

从这张表中我们只知道对乔的因果效应。研究中的其他人如果服药,血压可能会升高。然而,无论其他受试者的因果效应是什么,对乔而言,服药的因果效应都是血压降低(相对于他不服药时的血压而言)。

考虑更大的患者样本:

主题 [math]\displaystyle{ Y_{t}(u) }[/math] [math]\displaystyle{ Y_{c}(u) }[/math] [math]\displaystyle{ Y_{t}(u)-Y_{c}(u) }[/math]
乔 130 135 −5
玛丽 140 150 −10
莎莉 135 125 10
鲍勃 135 150 −15

The causal effect is different for every subject, but the drug works for Joe, Mary and Bob because the causal effect is negative. Their blood pressure is lower with the drug than it would have been if each did not take the drug. For Sally, on the other hand, the drug causes an increase in blood pressure.

In order for a potential outcome to make sense, it must be possible, at least a priori. For example, if there is no way for Joe, under any circumstance, to obtain the new drug, then [math]\displaystyle{ Y_{t}(u) }[/math] is impossible for him. It can never happen. And if [math]\displaystyle{ Y_{t}(u) }[/math] can never be observed, even in theory, then the causal effect of treatment on Joe's blood pressure is not defined.

每个受试者的因果效应各不相同,但由于因果效应为负,药物对乔、玛丽和鲍勃是有效的:他们服药后的血压低于各自不服药时的血压。另一方面,对莎莉来说,这种药物会导致血压升高。
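The unit-level arithmetic in the table can be reproduced in a few lines; this small sketch simply re-encodes the four subjects' hypothetical potential outcomes and recomputes each causal effect.

```python
# Potential blood pressures from the table: (Y_t(u), Y_c(u)) for each subject.
potential = {
    "Joe":   (130, 135),
    "Mary":  (140, 150),
    "Sally": (135, 125),
    "Bob":   (135, 150),
}

for unit, (y_t, y_c) in potential.items():
    effect = y_t - y_c                      # unit-level causal effect
    verdict = "lowers" if effect < 0 else "raises"
    print(f"{unit}: Y_t - Y_c = {effect:+d}  (drug {verdict} blood pressure)")
```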

为了使一个潜在结果有意义,它必须是可能发生的,至少在先验上如此。例如,如果乔在任何情况下都无法获得新药,那么 [math]\displaystyle{ Y_{t}(u) }[/math] 对他来说是不可能的,它永远不会发生。而如果 [math]\displaystyle{ Y_{t}(u) }[/math] 即使在理论上也永远无法被观察到,那么处理对乔的血压的因果效应就没有定义。

没有操纵就没有因果关系

No causation without manipulation

The causal effect of the new drug is well defined because it is the simple difference of two potential outcomes, both of which might happen. In this case, we (or something else) can manipulate the world, at least conceptually, so that it is possible that one thing or a different thing might happen.

This definition of causal effects becomes much more problematic if there is no way for one of the potential outcomes to happen, ever. For example, what is the causal effect of Joe's height on his weight? Naively, this seems similar to our other examples. We just need to compare two potential outcomes: what would Joe's weight be under the treatment (where treatment is defined as being 3 inches taller) and what would Joe's weight be under the control (where control is defined as his current height).

A moment's reflection highlights the problem: we can't increase Joe's height. There is no way to observe, even conceptually, what Joe's weight would be if he were taller because there is no way to make him taller. We can't manipulate Joe's height, so it makes no sense to investigate the causal effect of height on weight. Hence the slogan: No causation without manipulation.

新药的因果效应是明确定义的,因为它是两种可能发生的潜在结果的简单差异。在这种情况下,我们(或其他事物)可以干预世界,至少在概念上是这样,因此可能会发生不同的事。

如果某种潜在结果永远不可能发生,那么这种因果效应的定义就会变得更加棘手。例如,乔的身高对他的体重有什么因果效应?乍一看,这似乎与我们的其他例子类似:我们只需要比较两个潜在结果——乔在处理下的体重(处理被定义为比现在高 3 英寸)和乔在控制下的体重(控制被定义为他目前的身高)。

稍加思考就会发现问题所在:我们无法增加乔的身高。即使在概念上,也没有办法观察到乔如果更高他的体重会是多少,因为没有办法让他变得更高。我们无法操纵乔的身高,因此研究身高对体重的因果效应是没有意义的。因此有一句口号:没有操纵就没有因果关系。

稳定单元处理值假设 (SUTVA)

Stable unit treatment value assumption (SUTVA)

See also: Spillover (experiment)

We require that "the [potential outcome] observation on one unit should be unaffected by the particular assignment of treatments to the other units" (Cox 1958, §2.4). This is called the stable unit treatment value assumption (SUTVA), which goes beyond the concept of independence.

In the context of our example, Joe's blood pressure should not depend on whether or not Mary receives the drug. But what if it does? Suppose that Joe and Mary live in the same house and Mary always cooks. The drug causes Mary to crave salty foods, so if she takes the drug she will cook with more salt than she would have otherwise. A high salt diet increases Joe's blood pressure. Therefore, his outcome will depend on both which treatment he received and which treatment Mary receives.

SUTVA violation makes causal inference more difficult. We can account for dependent observations by considering more treatments. We create 4 treatments by taking into account whether or not Mary receives treatment.

我们要求“对一个单元的 [潜在结果] 观察不应受到其他单元的特定处理分配的影响”(Cox 1958,第 2.4 节)。这被称为稳定单元处理值假设(SUTVA),它超越了独立性的概念。

在我们的例子中,Joe 的血压不应该取决于 Mary 是否接受了药物。但如果真的发生了呢?假设乔和玛丽住在同一所房子里,玛丽总是做饭。这种药物会导致玛丽渴望咸的食物,所以如果她服用这种药物,她会用比其他情况下更多的盐来烹饪。高盐饮食会增加乔的血压。因此,他的结果将取决于他接受的处理和玛丽接受的处理。

在不满足SUTVA的情况下,因果推断会更加困难。我们可以通过考虑更多的处理来解释相关的观察结果。我们通过考虑 Mary 是否接受处理来创建 4 个处理。


主题 乔 = c,玛丽 = t 乔 = t,玛丽 = t 乔 = c,玛丽 = c 乔 = t,玛丽 = c
乔 140 130 125 120

Recall that a causal effect is defined as the difference between two potential outcomes. In this case, there are multiple causal effects because there are more than two potential outcomes. One is the causal effect of the drug on Joe when Mary receives treatment and is calculated as [math]\displaystyle{ 130-140 }[/math]. Another is the causal effect on Joe when Mary does not receive treatment and is calculated as [math]\displaystyle{ 120-125 }[/math]. The third is the causal effect of Mary's treatment on Joe when Joe is not treated. This is calculated as [math]\displaystyle{ 140-125 }[/math]. The treatment Mary receives has a greater causal effect on Joe than the treatment which Joe received has on Joe, and it is in the opposite direction.

By considering more potential outcomes in this way, we can cause SUTVA to hold. However, if any units other than Joe are dependent on Mary, then we must consider further potential outcomes. The greater the number of dependent units, the more potential outcomes we must consider and the more complex the calculations become (consider an experiment with 20 different people, each of whose treatment status can affect the outcomes for everyone else). In order to (easily) estimate the causal effect of a single treatment relative to a control, SUTVA should hold.

回想一下,因果效应被定义为两个潜在结果之间的差异。在这种情况下,存在多种因果效应,因为存在两个以上的潜在结果。一是玛丽接受处理时药物对乔的因果效应[math]\displaystyle{ 130-140 }[/math]。另一个是当玛丽没有接受处理时对乔的因果效应[math]\displaystyle{ 120-125 }[/math]。第三是在乔没有得到处理的情况下,玛丽的处理对乔的因果效应[math]\displaystyle{ 140-125 }[/math]。Mary 接受的处理对 Joe 的因果影响比 Joe 接受的处理对 Joe 的影响更大,而且是相反的方向。

通过以这种方式考虑更多潜在结果,我们可以使 SUTVA 成立。但是,如果除乔以外还有其他单元依赖于玛丽,那么我们就必须考虑更多的潜在结果。相互依赖的单元越多,我们必须考虑的潜在结果就越多,计算也就越复杂(设想一个有 20 个不同的人参加的实验,其中每个人的处理状态都会影响其他所有人的结果)。为了(容易地)估计单一处理相对于控制的因果效应,SUTVA 应当成立。
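A short sketch of the bookkeeping implied by the expanded treatment definition: Joe's potential outcomes are keyed by the pair of assignments (his and Mary's), and the three causal effects quoted above are read off directly. The dictionary layout is only an illustrative choice.

```python
# Joe's blood pressure under each combination of (Joe's, Mary's) assignments,
# taken from the table above; "t" = drug, "c" = control.
joe_bp = {
    ("c", "t"): 140,
    ("t", "t"): 130,
    ("c", "c"): 125,
    ("t", "c"): 120,
}

# Effect of the drug on Joe while Mary is treated / untreated.
effect_joe_given_mary_t = joe_bp[("t", "t")] - joe_bp[("c", "t")]   # 130 - 140 = -10
effect_joe_given_mary_c = joe_bp[("t", "c")] - joe_bp[("c", "c")]   # 120 - 125 = -5

# Effect of Mary's treatment on Joe while Joe is untreated.
effect_mary_on_joe = joe_bp[("c", "t")] - joe_bp[("c", "c")]        # 140 - 125 = +15

print(effect_joe_given_mary_t, effect_joe_given_mary_c, effect_mary_on_joe)
```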

平均因果效应(Average Causal Effect, ACE)

考虑:

主题 [math]\displaystyle{ Y_{t}(u) }[/math] [math]\displaystyle{ Y_{c}(u) }[/math] [math]\displaystyle{ Y_{t}(u)-Y_{c}(u) }[/math]
乔 130 135 −5
玛丽 130 145 −15
莎莉 130 145 −15
鲍勃 140 150 −10
詹姆士 145 140 +5
平均 135 143 −8

One may calculate the average causal effect by taking the mean of all the causal effects.

How we measure the response affects what inferences we draw. Suppose that we measure changes in blood pressure as a percentage change rather than in absolute values. Then, depending on the exact numbers, the average causal effect might be an increase in blood pressure. For example, assume that George's blood pressure would be 154 under control and 140 with treatment. The absolute size of the causal effect is −14, but the percentage difference (in terms of the treatment level of 140) is −10%. If Sarah's blood pressure is 200 under treatment and 184 under control, then the causal effect is 16 in absolute terms but 8% in terms of the treatment value. A smaller absolute change in blood pressure (−14 versus 16) yields a larger percentage change (−10% versus 8%) for George. Even though the average causal effect for George and Sarah is +1 in absolute terms, it is −1 in percentage terms.

人们可以通过取所有因果效应的平均值来计算平均因果效应。

我们如何测量响应会影响我们得出的推断。假设我们以百分比变化而不是绝对值来衡量血压的变化。那么,取决于具体的数字,平均因果效应可能反而是血压升高。例如,假设乔治的血压在控制下为 154,在处理下为 140。因果效应的绝对大小为 −14,但百分比差异(以 140 的处理水平计)为 −10%。如果莎拉的血压在处理下为 200,在控制下为 184,那么因果效应按绝对值计是 16,按处理值计则是 8%。乔治的血压绝对变化较小(−14 对 16),但百分比变化较大(−10% 对 8%)。因此,尽管乔治和莎拉的平均因果效应按绝对值计是 +1,按百分比计却是 −1。
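Both the averaging step and the scale-dependence point can be checked with a few lines of arithmetic; the sketch below uses the five-subject table for the average causal effect and the George/Sarah numbers from the paragraph above.

```python
# ACE from the five-subject table: mean of the unit-level effects.
effects = [-5, -15, -15, -10, +5]             # Joe, Mary, Sally, Bob, James
ace = sum(effects) / len(effects)
print(ace)                                     # -8.0

# George and Sarah: the sign of the "average effect" depends on the scale used.
george_t, george_c = 140, 154
sarah_t, sarah_c = 200, 184

abs_effects = [george_t - george_c, sarah_t - sarah_c]                 # [-14, +16]
pct_effects = [100 * (t - c) / t for t, c in [(george_t, george_c),
                                              (sarah_t, sarah_c)]]     # [-10.0, +8.0]

print(sum(abs_effects) / 2)   # +1.0  (absolute scale: looks like an increase)
print(sum(pct_effects) / 2)   # -1.0  (percentage scale: looks like a decrease)
```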

因果推理的基本问题

The fundamental problem of causal inference

The results we have seen up to this point would never be measured in practice. It is impossible, by definition, to observe the effect of more than one treatment on a subject over a specific time period. Joe cannot both take the pill and not take the pill at the same time. Therefore, the data would look something like this:

到目前为止我们所看到的这些结果,在实践中永远无法被测量。根据定义,不可能在同一特定时间段内观察到多种处理对同一受试者的影响。乔不可能既服药又不服药。因此,数据看起来会是这样:

主题 [math]\displaystyle{ Y_{t}(u) }[/math] [math]\displaystyle{ Y_{c}(u) }[/math] [math]\displaystyle{ Y_{t}(u)-Y_{c}(u) }[/math]
乔 130 ? ?

Question marks are responses that could not be observed. The Fundamental Problem of Causal Inference[2] is that directly observing causal effects is impossible. However, this does not make causal inference impossible. Certain techniques and assumptions allow the fundamental problem to be overcome.

Assume that we have the following data:

问号表示无法观察到的反应。因果推断的基本问题[2]在于,直接观察因果效应是不可能的。然而,这并不意味着因果推断不可能进行。借助某些技术和假设,这一基本问题是可以克服的。

假设我们有以下数据:

主题 [math]\displaystyle{ Y_{t}(u) }[/math] [math]\displaystyle{ Y_{c}(u) }[/math] [math]\displaystyle{ Y_{t}(u)-Y_{c}(u) }[/math]
乔 130 ? ?
玛丽 ? 125 ?
莎莉 100 ? ?
鲍勃 ? 130 ?
詹姆士 ? 120 ?
平均 115 125 −10


We can infer what Joe's potential outcome under control would have been if we make an assumption of constant effect:

如果我们假设效应恒定,我们可以推断出乔在控制下的潜在结果是什么:

[math]\displaystyle{ Y_{t}(u)=T+Y_{c}(u) }[/math]

[math]\displaystyle{ Y_{t}(u)-T=Y_{c}(u) }[/math]

If we wanted to infer the unobserved values we could assume a constant effect. The following table illustrates data consistent with the assumption of a constant effect.

如果我们想推断未观察到的值,我们可以假设效应是恒定的。下表展示了与恒定效应假设一致的数据。

主题 [math]\displaystyle{ Y_{t}(u) }[/math] [math]\displaystyle{ Y_{c}(u) }[/math] [math]\displaystyle{ Y_{t}(u)-Y_{c}(u) }[/math]
乔 130 140 −10
玛丽 115 125 −10
莎莉 100 110 −10
鲍勃 120 130 −10
詹姆士 110 120 −10
平均 115 125 −10

All of the subjects have the same causal effect even though they have different outcomes under the treatment.

尽管所有受试者在处理下的结果各不相同,但他们的因果效应都是相同的。
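The imputation implied by the constant-effect assumption [math]\displaystyle{ Y_{t}(u)=T+Y_{c}(u) }[/math] can be sketched as follows: estimate T from the observed group means, then fill in each missing potential outcome from the observed one. The values are those of the incomplete table above; the code layout is illustrative only.

```python
# Observed outcomes (None marks the potential outcome that was not observed).
observed = {            # (Y_t, Y_c)
    "Joe":   (130, None),
    "Mary":  (None, 125),
    "Sally": (100, None),
    "Bob":   (None, 130),
    "James": (None, 120),
}

# Estimate the constant effect T as the difference of observed group means.
t_obs = [y_t for y_t, _ in observed.values() if y_t is not None]
c_obs = [y_c for _, y_c in observed.values() if y_c is not None]
T = sum(t_obs) / len(t_obs) - sum(c_obs) / len(c_obs)       # 115 - 125 = -10

# Impute the missing counterfactual using Y_t(u) = T + Y_c(u).
completed = {}
for unit, (y_t, y_c) in observed.items():
    if y_t is None:
        y_t = y_c + T
    else:
        y_c = y_t - T
    completed[unit] = (y_t, y_c, y_t - y_c)

for unit, row in completed.items():
    print(unit, row)          # every unit-level causal effect equals T = -10
```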

分配机制

The assignment mechanism

The assignment mechanism, the method by which units are assigned treatment, affects the calculation of the average causal effect. One such assignment mechanism is randomization. For each subject we could flip a coin to determine if she receives treatment. If we wanted five subjects to receive treatment, we could assign treatment to the first five names we pick out of a hat. When we randomly assign treatments we may get different answers.

Assume that this data is the truth:

分配机制,即分配单位处理的方法,影响平均因果效应的计算。一种分配机制是随机化。对于每个受试者,我们可以抛硬币来确定她是否接受处理。如果我们希望五个受试者接受处理,我们可以将处理分配给我们从帽子里挑选出来的前五个名字。当我们随机分配处理时,我们可能会得到不同的答案。

假设这个数据是真实的:

主题 [math]\displaystyle{ Y_{t}(u) }[/math] [math]\displaystyle{ Y_{c}(u) }[/math] [math]\displaystyle{ Y_{t}(u)-Y_{c}(u) }[/math]
乔 130 115 15
玛丽 120 125 −5
莎莉 100 125 −25
鲍勃 110 130 −20
詹姆士 115 120 −5
平均 115 123 −8

The true average causal effect is −8. But the causal effect for these individuals is never equal to this average. The causal effect varies, as it generally (always?) does in real life. After assigning treatments randomly, we might estimate the causal effect as:

真正的平均因果效应是 −8。但这些个体各自的因果效应从来都不等于这个平均值。正如现实生活中通常(或者说总是?)的情况一样,因果效应因人而异。在随机分配处理之后,我们可能将因果效应估计为:

主题 [math]\displaystyle{ Y_{t}(u) }[/math] [math]\displaystyle{ Y_{c}(u) }[/math] [math]\displaystyle{ Y_{t}(u)-Y_{c}(u) }[/math]
乔 130 ? ?
玛丽 120 ? ?
莎莉 ? 125 ?
鲍勃 ? 130 ?
詹姆士 115 ? ?
平均 121.66 127.5 −5.83

A different random assignment of treatments yields a different estimate of the average causal effect.

处理的不同随机分配产生对平均因果效应的不同估计。

主题 [math]\displaystyle{ Y_{t}(u) }[/math] [math]\displaystyle{ Y_{c}(u) }[/math] [math]\displaystyle{ Y_{t}(u)-Y_{c}(u) }[/math]
乔 130 ? ?
玛丽 120 ? ?
莎莉 100 ? ?
鲍勃 ? 130 ?
詹姆士 ? 120 ?
平均 116.67 125 −8.33

The average causal effect varies because our sample is small and the responses have a large variance. If the sample were larger and the variance were less, the average causal effect would be closer to the true average causal effect regardless of the specific units randomly assigned to treatment.

Alternatively, suppose the mechanism assigns the treatment to all men and only to them.

平均因果效应的估计会有所变化,因为我们的样本很小,而且各单元响应的方差很大。如果样本更大、方差更小,那么无论哪些单元被随机分配到处理组,平均因果效应的估计都会更接近真实的平均因果效应。
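To see how unstable the estimate is with such a small sample, one can (hypothetically, since the full potential-outcome table is never observable in practice) repeat the random assignment many times on the table above and look at the spread of the resulting difference-in-means estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# The full table of potential outcomes (normally unobservable).
y_t = np.array([130, 120, 100, 110, 115])   # Joe, Mary, Sally, Bob, James
y_c = np.array([115, 125, 125, 130, 120])
true_ate = (y_t - y_c).mean()                # -8.0

# Repeat the experiment: randomly treat 3 of the 5 subjects each time.
estimates = []
for _ in range(10_000):
    treated = np.zeros(5, dtype=bool)
    treated[rng.choice(5, size=3, replace=False)] = True
    estimates.append(y_t[treated].mean() - y_c[~treated].mean())

estimates = np.array(estimates)
print(f"true ATE {true_ate:.2f}, mean of estimates {estimates.mean():.2f}, "
      f"spread (std) {estimates.std():.2f}")
```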

或者,假设该机制将处理分配给所有男性且仅分配给他们。

主题 [math]\displaystyle{ Y_{t}(u) }[/math] [math]\displaystyle{ Y_{c}(u) }[/math] [math]\displaystyle{ Y_{t}(u)-Y_{c}(u) }[/math]
乔 130 ? ?
鲍勃 110 ? ?
詹姆士 105 ? ?
玛丽 ? 130 ?
莎莉 ? 125 ?
苏茜 ? 135 ?
平均 115 130 −15

Under this assignment mechanism, it is impossible for women to receive treatment and therefore impossible to determine the average causal effect on female subjects. In order to make any inferences of causal effect on a subject, the probability that the subject receives treatment must be greater than 0 and less than 1.

在这种分配机制下,女性不可能接受处理,因此无法确定对女性受试者的平均因果效应。为了对受试者做出因果效应的任何推断,受试者接受治疗的概率必须大于 0 且小于 1。
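The positivity condition in the last sentence can at least be checked descriptively, by counting treated and control units within each covariate group; the records below are a hypothetical encoding of the men-only assignment above.

```python
from collections import Counter

# Hypothetical records produced by the men-only assignment: (name, sex, treated?).
records = [
    ("Joe", "M", True), ("Bob", "M", True), ("James", "M", True),
    ("Mary", "F", False), ("Sally", "F", False), ("Susie", "F", False),
]

counts = Counter((sex, treated) for _, sex, treated in records)

for sex in ("M", "F"):
    n_t, n_c = counts[(sex, True)], counts[(sex, False)]
    print(f"sex={sex}: treated={n_t}, control={n_c}, "
          f"positivity satisfied: {n_t > 0 and n_c > 0}")
# Neither group contains both treated and control units, so no within-group
# causal effect can be estimated from this assignment mechanism.
```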

完美的医生

The perfect doctor

Consider the use of the perfect doctor as an assignment mechanism. The perfect doctor knows how each subject will respond to the drug or the control and assigns each subject to the treatment that will most benefit her. The perfect doctor knows this information about a sample of patients:

考虑使用完美医生作为分配机制。完美的医生知道每个受试者对药物或对照的反应如何,并为每个受试者分配对她最有益的处理。完美的医生知道有关患者样本的以下信息:

主题 [math]\displaystyle{ Y_{t}(u) }[/math] [math]\displaystyle{ Y_{c}(u) }[/math] [math]\displaystyle{ Y_{t}(u)-Y_{c}(u) }[/math]
乔 130 115 15
鲍勃 120 125 −5
詹姆士 100 150 −50
玛丽 115 125 −10
莎莉 120 130 −10
苏茜 135 105 30
平均 120 125 −5

Based on this knowledge she would make the following treatment assignments:

基于这些知识,她将进行以下处理分配:

主题 [math]\displaystyle{ Y_{t}(u) }[/math] [math]\displaystyle{ Y_{c}(u) }[/math] [math]\displaystyle{ Y_{t}(u)-Y_{c}(u) }[/math]
乔 ? 115 ?
鲍勃 120 ? ?
詹姆士 100 ? ?
玛丽 115 ? ?
莎莉 120 ? ?
苏茜 ? 105 ?
平均 113.75 110 3.75

The perfect doctor distorts both averages by filtering out poor responses to both the treatment and control. The difference between means, which is the supposed average causal effect, is distorted in a direction that depends on the details. For instance, a subject like Susie who is harmed by taking the drug would be assigned to the control group by the perfect doctor and thus the negative effect of the drug would be masked.

完美的医生通过过滤掉对处理和控制的不良反应来扭曲这两个平均值。均值之间的差异,即假定的平均因果效应,在取决于细节的方向上发生扭曲。例如,像Susie这样因服药而受到伤害的受试者会被完美的医生分配到对照组,从而掩盖了药物的负面影响。
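The perfect-doctor bias can be reproduced by letting the assignment depend on the potential outcomes themselves and then computing the naive difference of observed means; this sketch uses the numbers from the table above.

```python
# Full potential outcomes from the perfect-doctor table: (Y_t, Y_c).
potential = {
    "Joe":   (130, 115),
    "Bob":   (120, 125),
    "James": (100, 150),
    "Mary":  (115, 125),
    "Sally": (120, 130),
    "Susie": (135, 105),
}

true_ate = sum(y_t - y_c for y_t, y_c in potential.values()) / len(potential)   # -5.0

# The "perfect doctor" gives each patient whichever option lowers blood pressure.
treated_obs = [y_t for y_t, y_c in potential.values() if y_t < y_c]
control_obs = [y_c for y_t, y_c in potential.values() if y_c <= y_t]

naive_diff = sum(treated_obs) / len(treated_obs) - sum(control_obs) / len(control_obs)

print(true_ate)     # -5.0  (the drug lowers blood pressure on average)
print(naive_diff)   # +3.75 (the naive comparison suggests the opposite)
```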

结论

Conclusion

The causal effect of a treatment on a single unit at a point in time is the difference between the outcome variable with the treatment and without the treatment. The Fundamental Problem of Causal Inference is that it is impossible to observe the causal effect on a single unit. You either take the aspirin now or you don't. As a consequence, assumptions must be made in order to estimate the missing counterfactuals.

The Rubin causal model has also been connected to instrumental variables (Angrist, Imbens, and Rubin, 1996)[6] and other techniques for causal inference. For more on the connections between the Rubin causal model, structural equation modeling, and other statistical methods for causal inference, see Morgan and Winship (2007).[7]

在某个时间点对单个单位的处理的因果效应是经过处理和未经过处理的结果变量之间的差异。因果推断的基本问题是不可能观察到对单个单元的因果效应。你要么现在服用阿司匹林,要么不服用。因此,必须做出假设以估计缺失的反事实。

Rubin 因果模型还与工具变量(Angrist、Imbens 和 Rubin,1996 年)[6]和其他因果推断技术相关联。有关 Rubin 因果模型、结构方程建模和其他因果推断统计方法之间联系的更多信息,请参见 Morgan 和 Winship (2007)。[7]

另见

- 因果关系
- 主要分层
- 倾向得分匹配


参考文献

  1. 1.0 1.1 Sekhon, Jasjeet (2007). "The Neyman–Rubin Model of Causal Inference and Estimation via Matching Methods". The Oxford Handbook of Political Methodology. http://sekhon.berkeley.edu/papers/SekhonOxfordHandbook.pdf. 
  2. 2.0 2.1 Holland, Paul W. (1986). "Statistics and Causal Inference". J. Amer. Statist. Assoc. 81 (396): 945–960. doi:10.1080/01621459.1986.10478354. JSTOR 2289064.
  3. Neyman, Jerzy. Sur les applications de la theorie des probabilites aux experiences agricoles: Essai des principes. Master's Thesis (1923). Excerpts reprinted in English, Statistical Science, Vol. 5, pp. 463–472. (D. M. Dabrowska, and T. P. Speed, Translators.)
  4. Rubin, Donald (2005). "Causal Inference Using Potential Outcomes". J. Amer. Statist. Assoc. 100 (469): 322–331. doi:10.1198/016214504000001880.
  5. 5.0 5.1 Rubin, Donald (1974). "Estimating Causal Effects of Treatments in Randomized and Nonrandomized Studies". J. Educ. Psychol. 66 (5): 688–701 [p. 689]. doi:10.1037/h0037350.
  6. Angrist, J.; Imbens, G.; Rubin, D. (1996). "Identification of Causal Effects Using Instrumental Variables". J. Amer. Statist. Assoc. 91 (434): 444–455.
  7. Morgan, S.; Winship, C. (2007). Counterfactuals and Causal Inference: Methods and Principles for Social Research.