Abstract

In research, the price of a false positive or a false negative is the incorrect alteration of theory. The cost of those same errors in the evaluation of social and educational programs is human and material. Errors in judging social programs may affect the expenditure of many dollars, the loss or gain of many jobs, the waste of limited resources, or the failure to relieve important social problems and human needs. Thus the conservatism of theoretical research, which makes reaching no conclusion far more likely than reporting erroneous results, is not always appropriate in the evaluation of programs. The failure to detect an existing effect in a social program may have consequences as serious as demonstrating effects that do not, in fact, exist (Cronbach & Associates, 1980). One important source of false value claims about a program under evaluation is weak statistical conclusion validity (Lindvall & Nitko, 1981), which concerns the sensitivity of the study and the reasonableness of the evidence for causation. Lindvall and Nitko point out that if program evaluation is to address the important issue of ecological validity (explication of the specific contexts in which effects will or will not occur), causal modeling must be adequate and plausible. Yet even when experimental designs that can demonstrate causation are used, disregard of the psychometric properties of scales and inappropriate use of statistics are common (Achilles, 1982). Program evaluation thus suffers from results, both positive and negative, that are often reversed by later studies or used as the basis for decisions that are later regretted. Nevertheless, decisions about social and educational programs must be made, and evaluation is needed or required in spite of the difficulties. Static group comparisons are a common design for contracted, external evaluations of intact programs, as are ex post facto models, even when more rigorous experimental designs were originally planned (Achilles, 1982). These preexperimental designs have important weaknesses both in eliminating alternative explanations of demonstrated effects and in providing the important cause-effect linkage (Campbell & Stanley, 1963; Kerlinger, 1973). Nevertheless, the reality of program evaluation is that the evaluator will often face this less-than-ideal situation. The present study was undertaken to determine whether strategies could be applied that would increase the statistical conclusion validity of an evaluation of a career education program that had already suffered from nearly all of the problems described by Achilles (1982) as common to field research.

This article is based in part on work done under contract with CEMREL, Inc., and the St. Louis Agency for Training and Employment, and submitted to St. Louis University as a doctoral dissertation. The author gratefully acknowledges the comments of two anonymous reviewers on a draft of this article.
