Chapter 72 Econometric Evaluation of Social Programs, Part III: Distributional Treatment Effects, Dynamic Treatment Effects, Dynamic Discrete Choice, and General Equilibrium Policy Evaluation
- Research Article
358
- 10.1086/260297
- Mar 1, 1974
- Journal of Political Economy
In recent years, Congress has considered a variety of work-subsidy programs designed to encourage work among welfare recipients. Many of these programs would subsidize individuals only if they work some minimum number of hours. Commonly used techniques cannot give direct answers to relevant policy questions since a tied offer is involved, and hence the offer cannot be treated as a simple wage change. The essence of the problem involves utility comparisons between two or more discrete alternatives. Such comparisons inherently require information about consumer preferences in a way not easily obtained from ordinary labor-supply functions. To make such comparisons, I present a method for directly estimating consumer indifference surfaces between money income and nonmarket time. Once these surfaces are determined, they can be used to compare a variety of alternative programs and to investigate whether there is scope for Pareto-optimal redistribution of income transfers and time, improving the general level of welfare of the community at large without reducing the welfare of individuals receiving income transfers. Knowledge of these indifference surfaces allows us to estimate reservation wages, the value of nonworking women's time (Gronau 1973), labor-force participation functions, hours-of-work functions, and welfare losses due to income tax programs (Harberger 1964). I demonstrate that direct estimation of indifference surfaces allows us, at least in principle, to relax …
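A hedged aside on the abstract above, a textbook construction rather than a formula quoted from the paper: once an indifference surface U(Y, L) over money income Y and nonmarket time L is estimated, the reservation wage is the marginal rate of substitution evaluated at the no-work point. Writing T for the time endowment and V for nonlabor income (notation assumed here),

    \[
      w^{*} \;=\; \left. \frac{\partial U / \partial L}{\partial U / \partial Y} \right|_{(Y,\,L) = (V,\,T)},
    \]

and the individual participates in the labor force if and only if the offered market wage exceeds w*. This is the sense in which estimated indifference surfaces deliver reservation wages and, from them, participation functions.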
- Research Article
116
- 10.1016/j.labeco.2007.06.002
- Jun 19, 2007
- Labour Economics
Identifying and Estimating the Distributions of Ex Post and Ex Ante Returns to Schooling
- Book Chapter
309
- 10.1016/s1573-4412(07)06071-0
- Jan 1, 2007
- Handbook of Econometrics
Chapter 71 Econometric Evaluation of Social Programs, Part II: Using the Marginal Treatment Effect to Organize Alternative Econometric Estimators to Evaluate Social Programs, and to Forecast their Effects in New Environments
- Research Article
15
- 10.2139/ssrn.826452
- Jan 1, 2005
- SSRN Electronic Journal
This paper considers semiparametric identification of structural dynamic discrete choice models and models for dynamic treatment effects. Time to treatment and counterfactual outcomes associated with treatment times are jointly analyzed. We examine the implicit assumptions of the dynamic treatment model using the structural model as a benchmark. For the structural model we show the gains from using cross equation restrictions connecting choices to associated measurements and outcomes. In the dynamic discrete choice model, we identify both subjective and objective outcomes, distinguishing ex post and ex ante outcomes. We show how to identify agent information sets.
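A hedged gloss on the ex post/ex ante distinction drawn in the abstract above, in notation standard in this literature rather than quoted from the paper: with potential outcomes Y_1 and Y_0 and agent information set \mathcal{I} at the time of choice,

    \[
      \text{ex post gain: } Y_1 - Y_0, \qquad
      \text{ex ante gain: } \mathbb{E}\left[\, Y_1 - Y_0 \mid \mathcal{I} \,\right].
    \]

Components of (Y_1, Y_0) that help predict the agent's choice are candidates for membership in \mathcal{I}; components with no predictive power are plausibly unknown ex ante. This is one sense in which agent information sets can be identified from choices, measurements, and outcomes.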
- Research Article
297
- 10.1016/j.jeconom.2005.11.002
- Feb 14, 2006
- Journal of Econometrics
Dynamic discrete choice and dynamic treatment effects
- Single Report
6
- 10.3386/t0316
- Oct 1, 2005
Dynamic Discrete Choice and Dynamic Treatment Effects
- Research Article
5
- 10.3102/01623737007001035
- Mar 1, 1985
- Educational Evaluation and Policy Analysis
In research, the price of a false positive or a false negative is the incorrect alteration of theory. The cost of those same errors in the evaluation of social and educational programs is human and material: errors in judging social programs may mean the expenditure of many dollars, the loss or gain of many jobs, the waste of limited resources, or the failure to relieve important social problems and human need. Thus the conservatism of theoretical research, which makes a conclusion of no effect far more likely than a positive result, is not always appropriate in the evaluation of programs. The failure to detect an existing effect in a social program may have consequences as serious as demonstrating effects that do not, in fact, exist (Cronbach & Associates, 1980). One important source of false value claims for a program being evaluated is weak statistical conclusion validity (Lindvall & Nitko, 1981), which concerns the sensitivity of the study and the reasonableness of the evidence for causation. Lindvall and Nitko point out that if program evaluation is to address the important issue of ecological validity (explication of the specific contexts in which effects will or will not occur), then causal modeling must be adequate and plausible. Yet even when experimental designs that can demonstrate causation are used, disregard of the psychometric properties of scales and inappropriate use of statistics are common (Achilles, 1982). Program evaluation thus suffers from results, both positive and negative, that are often reversed by later studies or used as the basis for decisions that are later regretted. Nevertheless, decisions about social and educational programs must be made, and evaluation is needed and/or required in spite of the difficulties. Static group comparisons are a common design for contracted, external evaluations of intact programs, as are ex post facto models, even when more rigorous experimental designs were originally planned (Achilles, 1982). These preexperimental designs have important weaknesses, both for eliminating alternative explanations of demonstrated effects and for establishing the important cause-effect linkage (Campbell & Stanley, 1963; Kerlinger, 1973). Nevertheless, the reality of program evaluation is that the evaluator will often face this less-than-ideal situation. The present study was done to determine whether strategies could be applied that would increase the statistical conclusion validity of an evaluation of a career education program that had already suffered from nearly all of the problems described by Achilles (1982) as common to field research. This article is based in part on work done under contract with CEMREL, Inc., and the St. Louis Agency for Training and Employment, and submitted to St. Louis University as a doctoral dissertation. The author gratefully acknowledges the comments of two anonymous reviewers on a draft of this article.
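A hedged illustration of the power concern in the abstract above: inadequate statistical power is a classic threat to statistical conclusion validity, and a planning calculation like the following is one strategy an evaluator can apply. The effect size, alpha, and sample sizes are illustrative assumptions, not values from the study.

    import numpy as np
    from statsmodels.stats.power import TTestIndPower

    # Hypothetical planning values: a small-to-moderate standardized effect,
    # the conventional alpha level, and a target power of 0.80.
    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(effect_size=0.3, alpha=0.05, power=0.80,
                                       alternative="two-sided")
    print(f"participants needed per group: {np.ceil(n_per_group):.0f}")

    # Conversely, the power actually achieved with 60 participants per group:
    achieved = analysis.power(effect_size=0.3, nobs1=60, alpha=0.05,
                              ratio=1.0, alternative="two-sided")
    print(f"power with 60 per group: {achieved:.2f}")

With these assumed numbers the design needs roughly 175 participants per group, and a 60-per-group study detects the effect well under half the time, which is exactly how underpowered evaluations come to "demonstrate" no effect.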
- Research Article
1
- 10.17759/jmfp.2016050205
- Jan 1, 2016
- Современная зарубежная психология
This article discusses the development of children's participation in the evaluation of social programs and the evidence that this kind of participation has a positive impact on children's psychological development. The article reviews 27 foreign studies concerning children and organizes the evidence on how participatory behavior affects a child's psychological development. The review shows that including children in program evaluation can contribute to their positive development. The authors also discuss the inclusion of children in the evaluation of social projects and programs, and the opportunity to develop this practice in Russia.
- Research Article
52
- 10.1016/j.jeconom.2015.12.001
- Dec 29, 2015
- Journal of Econometrics
Dynamic treatment effects
- Research Article
112
- 10.1162/003465398557203
- Feb 1, 1998
- Review of Economics and Statistics
This paper explores issues that arise in the evaluation of social programs using experimental data in the frequently encountered case where some of the experimental treatment group members drop out of the program prior to receiving treatment. We begin with the standard estimator for this case and the identifying assumption upon which it rests. We then examine the behavior of the estimator when the dropouts receive a partial “dose” of the program treatment prior to dropping out of the program. In the case of partial treatment, the identifying assumption is typically violated, thereby making the estimator inconsistent for the conventional parameter of interest: the impact of full treatment on the fully treated. We develop a test of the identifying assumption underlying the standard estimator and consider whether exclusion restrictions produce identification of the mean impact of the program when this assumption fails to hold. Finally, we discuss alternative parameters of interest in the presence of partial treatment among the dropouts and argue that the conventional parameter is not always the economically interesting one. We apply our methods to data from a recent experimental evaluation of the Job Training Partnership Act (JTPA) program.
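A hedged sketch of the "standard estimator" the abstract refers to, commonly attributed to Bloom (1984): under the identifying assumption that dropouts receive zero treatment, the intention-to-treat contrast rescaled by the treatment receipt rate recovers the impact of full treatment on the fully treated. The simulated data and names below are illustrative, not from the JTPA evaluation.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Hypothetical experiment: z = random assignment, d = actually treated
    # (30% of the treatment group drops out before receiving any services),
    # y = outcome with a true full-treatment impact of 1.0.
    z = rng.integers(0, 2, n)
    d = z * (rng.random(n) < 0.7)
    y = 1.0 * d + rng.normal(0.0, 1.0, n)

    itt = y[z == 1].mean() - y[z == 0].mean()  # intention-to-treat contrast
    p = d[z == 1].mean()                       # treatment receipt rate
    print(f"ITT = {itt:.3f}, receipt rate = {p:.2f}, Bloom estimate = {itt / p:.3f}")

If dropouts instead receive a partial dose before leaving, the zero-dose assumption fails and itt / p is no longer consistent for the impact of full treatment on the fully treated; this is the violation the paper's test targets.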
- Research Article
51
- 10.1176/appi.ps.54.8.1087
- Aug 1, 2003
- Psychiatric Services
Not available
- Single Book
2
- 10.1002/9781444307399
- Oct 17, 2008
I. Preface (Maureen A. Pirog).
II. The State of Social Experimentation and Program Evaluation (Maureen A. Pirog).
III. Social Experiments Versus Quasi-Experiments.
- The Role of Random Assignment in Social Policy Research (Richard P. Nathan).
- The Role of Random Assignment in Social Policy Research (Robinson Hollister).
- Nathan Response to Robinson Hollister's Opening Statement (Richard P. Nathan).
- Hollister Response to Richard Nathan's Opening Statement (Robinson Hollister).
- Do Experimental and Nonexperimental Evaluations Give Different Answers About the Effectiveness of Government-Funded Training Programs? (David H. Greenberg, Charles Michalopoulos, and Philip K. Robins).
- How Close Is Close Enough? Evaluating Propensity Score Matching Using Data from a Class Size Reduction Experiment (Elizabeth Ty Wilde and Robinson Hollister).
- Three Conditions under Which Experiments and Observational Studies Produce Comparable Causal Estimates: New Findings from Within-Study Comparisons (Thomas D. Cook, William R. Shadish, and Vivian C. Wong).
IV. Randomized Experiments.
- Impacts of Abstinence-Only Education on Teen Sexual Activity, Risk of Pregnancy, and Risk of Sexually Transmitted Diseases (Barbara Devaney, Chris Trenholm, Ken Forston, Melissa Clark, Lisa Quay, and Justin Wheeler).
- Five-Year Effects of an Anti-Poverty Program on Marriage among Never-Married Mothers (Anna Gassman-Pines and Hirokazu Yoshikawa).
- Alternative Routes to Teaching: The Impacts of Teach for America on Student Achievement and Other Outcomes (Steven Glazerman, Daniel Mayer, and Paul Decker).
V. Quasi-Experiments.
A. Natural Experiments.
- Anti-Depressants, Suicide, and Drug Regulation (Jens Ludwig and Dave E. Marcotte).
- Lowering Blood Alcohol Content Levels to Save Lives: The European Experience (Daniel Albalate).
- The Impact of the Family and Medical Leave Act (Jane Waldfogel).
- The Effects of State and Local Antidiscrimination Policies on Earnings for Gays and Lesbians (Marieka M. Klawitter and Victor Flatt).
B. Pretests and Posttests with Comparison Groups and Selection Controls.
- A Cure for Crime: Can Mental Health Treatment Diversion Reduce Crime among Youth? (Allison Evans Cuellar, Larkin S. McReynolds, and Gail A. Wasserman).
- Cashing Out Food Stamps: Impacts on Food Expenditures and Diet Quality (Barbara Devaney and Thomas Fraker).
C. Interrupted Time Series with Comparison Groups.
- The Effect of Drinking Age Laws and Alcohol-Related Crashes: Time-Series Evidence from Wisconsin (David N. Figlio).
- Evaluating the Effects of Automobile Safety Regulation (John D. Graham and Steven Garber).
D. Posttests Only with Comparison Groups.
- Does WIC Work? The Effects of WIC on Pregnancy and Birth Outcomes (Marianne P. Bitler and Janet Currie).
- The Changing Association between Prenatal Participation in WIC and Birth Outcomes in New York City (Ted Joyce, Diane Gibson, and Silvie Colman).
- The Changing Association between Prenatal Participation in WIC and Birth Outcomes in New York City: What Does It Mean? (Marianne P. Bitler and Janet Currie).
- Interpreting the WIC Debate (Jens Ludwig and Matthew Miller).
E. Regression Discontinuity Design.
- An Effectiveness-Based Evaluation of Five State Pre-Kindergarten Programs (Vivian C. Wong, Thomas D. Cook, W. Steven Barnett, and Kwanghee Jung).
VII. Meta-Analyses.
- Assessing Evidence of Environmental Inequities: A Meta-Analysis (Evan J. Ringquist).
VIII. Implementation, Performance Management, and Program Impacts.
- Linking Program Implementation and Effectiveness: Lessons from a Pooled Sample of Welfare-to-Work Experiments (Howard S. Bloom, Carolyn J. Hill, and James A. Riccio).
- Exploring the Relationship between Performance Management and Program Impact: A Case Study of the JTPA (Burt S. Barnow).
IX. Ethics and Human Subjects.
- Toward a More Public Discussion of the Ethics of Federal Social Program Evaluation (Jan Blustein).
- The Ethics of Federal Social Program Evaluation: A Response to Blustein (Burt S. Barnow).
- To Learn or Not to Learn (Howard Rolston).
- Comments on Dr. Blustein's Paper, "Toward a More Public Discussion of the Ethics of Federal Social Program Evaluation" (Peter Z. Schochet).
- Jan Blustein's Response (Jan Blustein).
X. The Use of Program Evaluations by Policy Makers.
- The Dissemination and Utilization of Welfare-to-Work Experiments in State Policymaking (David Greenberg, Marvin Mandell, and Matthew Onstott).
- Preprint Article
3
- 10.1688/1861-9916_ijar_2011_01_silva
- Jan 1, 2011
- International journal of action research
"This article discusses the evaluation of social policies and programmes in \nthe perspective of evaluation research. It tries to develop a methodology \nthat has a participatory content. Thus, the evaluation of social policies and \nprogrammes is considered in its full potential for the construction of \nknowledge. It is seen as a development of the processes of public policies \nthat involves different subjects, who have different interests and rationalities. \nIn the construction of a concept of a participatory evaluation research, \nthe article takes into account its technical, political and academic \nfunctions. Therefore, it reaffirms two dimensions of evaluation research: \ntechnical and political. The commitment of the evaluator-researcher to the \ncritique of reality in the search for its transformation is the reference for \nthe development of a participatory approach in evaluation research. \nThe paper presents an introduction that describes the origins of what is \nconsidered as a participatory approach for evaluation of social policies \nand programmes, followed by developing reflections about evaluation \nas a part of the process of public policies; presents a concept of evaluation \nresearch in order to consider, in the following sections, details of the \nconstruction of a participatory concept and approach in evaluation \nresearch." (author's abstract)
- Research Article
9
- 10.1177/008124630603600106
- Mar 1, 2006
- South African Journal of Psychology
This article suggests that psychologists may find value in the literature on programme evaluation, both theoretically and methodologically. Programme evaluation is an eclectic and diverse field and its literature reflects the contributions of persons trained within a variety of disciplines. It draws on a number of fields, which include management and organisational theory, policy analysis, education, sociology, social anthropology and the literature on social change. As such, the literature on programme evaluation may have value for psychologists planning evaluations of social programmes, in providing access to evaluation approaches and models developed within these different traditions. In terms of the breadth of perspectives and research traditions on which the evaluation literature draws, different forms of evaluation research can contribute to a psychology in South Africa which deals with multiple values and issues. On a theoretical level, this article suggests that the issues and debates reflected in the evaluation literature (e.g., those on empowerment) mirror debates that have occurred within the mainstream of psychology over the past 20 years. For this reason, the issues raised in the evaluation literature are relevant to the development of psychology as a discipline. The approaches and models proposed for the evaluation of social programmes are also potentially useful on a methodological level, particularly to those psychologists who work in community settings.
- Research Article
1
- 10.1080/01621459.2024.2314316
- Mar 9, 2024
- Journal of the American Statistical Association
Many modern tech companies, such as Google, Uber, and Didi, use online experiments (also known as A/B testing) to evaluate new policies against existing ones. While most studies concentrate on average treatment effects, situations with skewed and heavy-tailed outcome distributions may benefit from alternative criteria, such as quantiles. However, assessing dynamic quantile treatment effects (QTE) remains a challenge, particularly when dealing with data from ride-sourcing platforms that involve sequential decision-making across time and space. In this article, we establish a formal framework to calculate QTE conditional on characteristics independent of the treatment. Under specific model assumptions, we demonstrate that the dynamic conditional QTE (CQTE) equals the sum of individual CQTEs across time, even though the conditional quantile of cumulative rewards may not necessarily equate to the sum of conditional quantiles of individual rewards. This crucial insight significantly streamlines the estimation and inference processes for our target causal estimand. We then introduce two varying coefficient decision process (VCDP) models and devise an innovative method to test the dynamic CQTE. Moreover, we expand our approach to accommodate data from spatiotemporal dependent experiments and examine both conditional quantile direct and indirect effects. To showcase the practical utility of our method, we apply it to three real-world datasets from a ride-sourcing platform. Theoretical findings and comprehensive simulation studies further substantiate our proposal. Supplementary materials for this article are available online. Code implementing the proposed method is also available at: https://github.com/BIG-S2/CQSTVCM.
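A hedged numerical aside on the abstract above: the "crucial insight" is needed because quantiles, unlike means, are not additive across periods in general. The toy simulation below (plain NumPy, not the paper's VCDP models) shows the gap between the quantile of a cumulative reward and the sum of per-period quantiles.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000

    # Two periods of rewards for the same units, positively dependent.
    r1 = rng.exponential(1.0, n)
    r2 = 0.5 * r1 + rng.exponential(1.0, n)

    q = 0.9
    print(np.quantile(r1 + r2, q))                  # quantile of the cumulative reward
    print(np.quantile(r1, q) + np.quantile(r2, q))  # sum of per-period quantiles
    # The two generally differ; the paper supplies model assumptions under which
    # the treatment-effect contrast of conditional quantiles is additive over time.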