Learning More from Social Experiments: Evolving Analytic Approaches

Constructing instrumental variables from experimental data to explore how treatments produce effects

Publication: Chapter
Gennetian, LA; Morris, PA; Bos, JM; Bloom, HS
December 1, 2006

A random-assignment study can provide the most compelling evidence possible about how an intervention - be it social, economic, legal, or medical - affects the people to whom it is targeted. Randomization entails using a lottery-like process to assign each eligible sample member either to a group that is offered the intervention or to a group that is not. This process ensures that the two groups are the same in every way (in statistical expectation), except that one group is assigned to the intervention and the other is not. Any statistically significant differences subsequently observed between the two groups can therefore be confidently attributed to the intervention. This is why randomization, often referred to as the gold standard for studying cause-and-effect relationships, is now widely used in many fields of research. In medicine, more than 350,000 randomized clinical trials have been conducted during the last fifty years (Cochrane Collaboration 2002). In social policy, more than 800,000 people were randomly assigned in 220 studies between 1962 and 1997 (Greenberg and Shroder 1997). Reflecting and reinforcing these trends, the Institute of Education Sciences was created within the U.S. Department of Education in 2002 in the belief that amassing rigorous, credible evidence about the effects of education interventions calls for randomized controlled trials (U.S. Department of Education 2003). The spread of randomization in social policy research has not been confined to the United States. In the developing world, the effectiveness of a variety of poverty-reduction programs, particularly ones focused on improving children's educational attainment, is increasingly likely to be evaluated using random-assignment designs (see, for example, Schultz 2001). Here we refer to any study that uses random assignment as an experiment.

Problems related to sample selection, randomization, and attrition can undermine experimental designs, thereby preventing researchers from drawing valid causal inferences and reducing the policy relevance of the research. Although these problems deserve careful consideration, this chapter does not address them. Key assumptions underlying the present discussion are that an experiment's random-assignment process is well designed and successfully executed and that data collection is complete (or nearly complete) for the individuals or other entities under study. But even well-realized experiments have limitations that are important to recognize if one is to interpret findings correctly and to design studies that provide the highest-quality, most policy-relevant information possible. We briefly review these limitations before introducing the methodological approach that is the centerpiece of this chapter.

First, even when random assignment is feasible, full compliance with its outcome usually cannot be assured. For example, one can randomize the offer of a new medical treatment to patients afflicted with a particular type of cancer, but one cannot guarantee full randomization of its receipt without forcing some patients to accept it against their will, which is unethical. Similarly, one can randomize the offer of subsidized child care for low-income families, but one cannot guarantee full randomization of its use because families have the right to choose their children's care. And one can randomize the offer of an education voucher that enables people to move their children to a better school, but one cannot randomize such moves without encountering tremendous political resistance. This limitation of the experimental approach is often not fully appreciated or understood. The questions at its heart - namely, who has the opportunity to take advantage of an offered treatment and who actually takes advantage of it - are fundamentally different from the question of what dosage of the treatment is offered. (As illustrated in the sketches following this abstract, the randomized offer itself can serve as an instrument for actual receipt.)

A second limitation of experimental research is that it can be used to study only interventions to which entities can be randomly assigned. Because randomization is sometimes impossible or unacceptable, many important questions cannot be directly addressed using the experimental approach. To take a clear-cut example, the effects of birth parents' demographic characteristics on their children cannot be studied by randomly assigning children to parents or parents to children.

Finally, although experiments are the most powerful known way to assess the causal effects of an intervention on an outcome, they do not by themselves provide much insight into how these effects are brought about. This issue is particularly important for interventions that consist of numerous components, any combination of which might be responsible for an observed effect. For example, testing a multifaceted reading curriculum by randomly assigning each member of a sample of first-grade students to a group that is exposed to the new curriculum or to a group that is not is an excellent way to measure any subsequent changes in reading achievement caused by the curriculum (relative to the curriculum to which students would otherwise have been exposed). However, without more information, additional assumptions, or both, the experiment will not provide compelling evidence about how or why the curriculum does or does not improve reading achievement or about the relative effectiveness of its various components. This limitation of the experimental paradigm is often referred to as the "black box" problem. Simply put, experiments are good at documenting the linkages, or the lack thereof, between an intervention (the input to the black box) and outcomes (the output of the black box), but they provide little or no direct information about why the intervention did or did not affect the outcome.

The goal of this chapter is to present an analytic approach that combines randomized experiments with a well-known nonexperimental method from econometrics called instrumental-variables estimation. Although coupling the instrumental-variables approach with experimental data is not new, we extend it to enable researchers to address questions about how the outcomes observed in an experiment might have been affected by multiple factors associated with the treatment. As discussed in the literature review that follows, the technique holds promise as a way to mitigate the three limitations of experimental research already noted. Appropriately used, the instrumental-variables approach can broaden the range of policy issues that can be effectively addressed to include questions such as the following: What is the effect of taking small daily doses of aspirin on the incidence and severity of future heart attacks among men over fifty? What is the effect of serving in the military on the future earnings of people who render military service during wartime? What is the effect of families' moving from an impoverished neighborhood to a better-off neighborhood on the future criminal behavior of their adolescent children? What is the effect of attending job training on the future earnings of young school dropouts? What are the effects on children of policy-induced increases in their parents' future earnings and income?

In the next section of this chapter, we lay out a conceptual and statistical framework that will allow us to describe the limits of experimental evidence (even in well-realized random-assignment designs), illustrate how analysis can be extended beyond these limits, and delineate the conditions under which such extension is possible. To introduce the framework, we present and discuss a series of effect estimators that may be used to answer different policy questions of interest. These estimators form the foundation of the instrumental-variables analysis presented in the third section of the chapter, in which we illustrate how the technique can be applied to experimental data to explore the causal paths by which an intervention produces its observed effects. In the concluding section, we reflect on the potential of the instrumental-variables approach and the conditions necessary for its success.

Copyright © 2005 by Russell Sage Foundation. All rights reserved.
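As context for the noncompliance limitation described above, the following is our own hedged sketch of the standard construction that motivates the chapter's approach; the notation (Z, D, Y) is ours, not taken from the chapter. Let Z indicate the randomized offer, D actual receipt of the treatment, and Y the outcome. If the offer affects the outcome only through receipt (the exclusion restriction, made plausible by randomization), the intent-to-treat contrast equals the effect of receipt scaled by the offer's effect on take-up, which yields the familiar Wald/IV estimator:

\[
\hat{\Delta}_{\mathrm{IV}}
  = \frac{E[\,Y \mid Z=1\,] - E[\,Y \mid Z=0\,]}
         {E[\,D \mid Z=1\,] - E[\,D \mid Z=0\,]}
\]

Under the usual additional assumptions (in particular, that the offer never discourages take-up), this ratio identifies the average effect of receipt among those induced to participate by the offer.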
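The same identification logic can be seen numerically. Below is a minimal, self-contained Python sketch that is entirely ours - the simulated data, the effect size of 2.0, and all variable names are invented for illustration and do not come from the chapter. It shows that a naive comparison of participants with nonparticipants is biased when take-up is self-selected, while the Wald/IV estimator built from the randomized offer recovers the true effect of receipt.

import numpy as np

# Illustrative simulation (all quantities invented): a randomized offer z,
# self-selected take-up d, and an outcome y confounded by "motivation" u.
rng = np.random.default_rng(0)
n = 100_000

z = rng.integers(0, 2, size=n)   # randomized offer (the experiment's lottery)
u = rng.normal(size=n)           # unobserved confounder driving take-up and outcome

# One-sided noncompliance: only some of those offered take up the treatment,
# and higher-u (more motivated) people are more likely to do so.
d = ((z == 1) & (u + rng.normal(size=n) > 0)).astype(float)

# True effect of receipt is 2.0; u also raises the outcome directly.
y = 2.0 * d + 1.5 * u + rng.normal(size=n)

naive = y[d == 1].mean() - y[d == 0].mean()   # biased: self-selected comparison
itt = y[z == 1].mean() - y[z == 0].mean()     # intent-to-treat: effect of the offer
takeup = d[z == 1].mean() - d[z == 0].mean()  # offer's effect on receipt
iv = itt / takeup                             # Wald/IV estimate of effect of receipt

print(f"naive: {naive:.2f}   ITT: {itt:.2f}   IV (should be near 2.0): {iv:.2f}")

In practice one would also compute standard errors, for example via two-stage least squares; the sketch is meant only to show why dividing the intent-to-treat effect by the take-up rate undoes the dilution caused by noncompliance.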


ISBN

9780871541338

Publication Date

December 1, 2006

Start / End Page

75 / 114

Citation

APA: Gennetian, L. A., Morris, P. A., Bos, J. M., & Bloom, H. S. (2006). Constructing instrumental variables from experimental data to explore how treatments produce effects. In Learning More from Social Experiments: Evolving Analytic Approaches (pp. 75–114).

Chicago: Gennetian, L. A., P. A. Morris, J. M. Bos, and H. S. Bloom. “Constructing instrumental variables from experimental data to explore how treatments produce effects.” In Learning More from Social Experiments: Evolving Analytic Approaches, 75–114, 2006.

ICMJE: Gennetian LA, Morris PA, Bos JM, Bloom HS. Constructing instrumental variables from experimental data to explore how treatments produce effects. In: Learning More from Social Experiments: Evolving Analytic Approaches. 2006. p. 75–114.

MLA: Gennetian, L. A., et al. “Constructing instrumental variables from experimental data to explore how treatments produce effects.” Learning More from Social Experiments: Evolving Analytic Approaches, 2006, pp. 75–114.

NLM: Gennetian LA, Morris PA, Bos JM, Bloom HS. Constructing instrumental variables from experimental data to explore how treatments produce effects. Learning More from Social Experiments: Evolving Analytic Approaches. 2006. p. 75–114.