Instrumental Variables Estimation and Local Average Treatment Effects
Authors: Paul Glewwe and Petra Todd
Published: January 2022 | DOI: https://doi.org/10.1596/978-1-4648-1497-6_ch15
ISBN: 978-1-4648-1497-6 | e-ISBN: 978-1-4648-1498-3

Abstract

Instrumental variables (IV) methods have two general uses in the evaluation of programs, projects, or policies. First, when a randomized controlled trial (RCT) is contaminated, either because some individuals or groups randomly assigned to the treatment group refuse treatment or because some assigned to the control group manage to obtain treatment, IV methods allow estimation of the local average treatment effect (LATE). Second, and more generally, IV methods provide a way to estimate treatment effects when selection bias is present. This second point applies not only to RCT-based evaluations but also to evaluations based on nonrandomized (nonexperimental) data. However, some difficulties arise in using IV methods to estimate program impacts: instrumental variables must satisfy certain assumptions, so credible instruments may be hard to find; and, in general, estimating the average treatment effect (ATE) or the average treatment effect on the treated (ATT) via instrumental variables requires assumptions that may not hold.

Keywords: impact evaluation; monitoring and evaluation (M&E); performance evaluation; evaluation approaches; regression analysis; policy design and implementation; randomized controlled trials (RCTs)
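The LATE logic described in the abstract can be sketched numerically. In an RCT with imperfect compliance, random assignment serves as the instrument, and the Wald (IV) estimator divides the intention-to-treat effect on the outcome by the assignment's effect on treatment take-up. The following is a minimal simulation, not the chapter's own code; the compliance rate, effect size, and variable names are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Random assignment to the treatment group (the instrument Z)
z = rng.integers(0, 2, n)

# Imperfect compliance: compliers take the treatment only if assigned.
# There are no always-takers and no defiers here, consistent with the
# monotonicity assumption behind LATE.
complier = rng.random(n) < 0.6          # 60% compliers (hypothetical rate)
d = z * complier                        # treatment actually received

# Outcomes: treatment raises Y by 2.0 for the treated (hypothetical effect)
y = 1.0 + 2.0 * d + rng.normal(0.0, 1.0, n)

# Wald / IV estimator: ITT effect on Y divided by ITT effect on take-up
itt_y = y[z == 1].mean() - y[z == 0].mean()
itt_d = d[z == 1].mean() - d[z == 0].mean()
late = itt_y / itt_d
print(round(late, 2))  # close to the true complier effect of 2.0
```

Note that a naive comparison of treated versus untreated units (`y[d == 1].mean() - y[d == 0].mean()`) would be unbiased here only because compliance is random in this toy setup; in the selection-bias settings the chapter discusses, take-up correlates with unobservables and the IV ratio is what remains consistent, and only for compliers.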
