Abstract
Government-funded training programs in the United States have often been subject to rigorous evaluation. Indeed, many of these programs have been evaluated with random assignment, although sophisticated quasi-experimental methods have also been used. Until very recently, however, there has been little systematic attempt to use the cumulative information vested in these evaluations to determine which kinds of programs work best in which settings and for which types of clients. Meta-analysis, a set of statistical procedures for systematically synthesizing findings from separate studies, can, in theory at least, address these and other questions that evaluations of individual programs cannot. This article discusses the steps in conducting such a synthesis, summarizes the results of three recently conducted meta-analyses of training and welfare-to-work programs, identifies limitations of the meta-analytic approach, and considers ways in which some of these limitations can be overcome.
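Although the abstract does not describe the statistical machinery, the core pooling step of a meta-analysis can be illustrated briefly. The sketch below applies fixed-effect, inverse-variance weighting to a set of effect estimates; the numbers and the choice of a fixed-effect model are assumptions for illustration only, not methods or results from the article or the evaluations it synthesizes.

```python
# Minimal sketch of fixed-effect, inverse-variance pooling.
# The (effect estimate, standard error) pairs are hypothetical placeholders,
# not findings from the training or welfare-to-work evaluations discussed.
import math

studies = [(0.12, 0.05), (0.08, 0.04), (0.20, 0.10)]

# Weight each study by the inverse of its sampling variance
weights = [1.0 / se**2 for _, se in studies]

# Pooled effect is the weighted mean; its standard error follows from the weights
pooled = sum(w * est for w, (est, _) in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
```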