Abstract

The practical importance of quasi-experimental (QE) studies in education has been growing with the increasing availability and reliability of longitudinal databases maintained by state education agencies. With access to such a database, the impact on student achievement of a program implemented in a few schools in one district can, for example, be evaluated by drawing comparable students from anywhere across the state. These databases present special opportunities because of their comprehensive nature, but also challenges because of their size. The problem we address in this paper is how to select a comparison group that takes advantage of the richness and the size of a state data system while allowing the computation to be completed in a reasonable amount of time with the technology currently available to researchers. Most importantly, this must be done while optimizing the similarity of the students in the comparison group to the students in the group implementing the program.

The usefulness of a QE study is, of course, dependent on the similarity of the treatment and comparison groups. Unlike in a randomized controlled trial (RCT), where the treatment and control groups can be expected to be equivalent, a comparison group selected after the fact cannot be assumed to be equivalent to the treated group, and this can lead to biased estimates of the program effect. A variety of analytical strategies have been developed to approximate the conditions of an RCT and thereby minimize this selection bias.

Although a considerable body of research has been devoted to analyzing the empirical problems of QE studies of job training programs, very little work of this kind has been conducted in the context of large-scale evaluations in K-12 education. For recent contributions and surveys of earlier studies, see Glazerman, Levy, and Myers (2003); Bloom, Michalopoulos, and Hill (2005); and Cook, Shadish, and Wong (2008).
In addition, existing program evaluation methods have been developed and tested on relatively small datasets, typically thousands of observations, while the student information system of a typical state can contain millions of records. Given the limited computing resources available to most mainstream and many academic researchers, the computational efficiency of analytical methods comes to the foreground.

We propose, and test through a computational experiment, an approach that takes advantage of what we suspect is a tendency of K-12 practitioners to adopt new programs in some schools and not others. For example, a school district administrator running a pilot will likely talk to school principals to find those interested, while teacher teams may share innovations within a school rather than across schools. An optimal approach would therefore involve a two-stage matching process, which first finds schools comparable to those participating in the program and then, within the matched schools, identifies students who match the individuals in the program.
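The two-stage idea can be sketched as follows. This is an illustrative outline only, assuming a simple Euclidean nearest-neighbor matcher on numeric covariate vectors; the function names, the data layout (dicts with `id`, `school`, and covariate fields), and the distance metric are our assumptions, not the paper's actual algorithm.

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

def nearest(item, pool, cov):
    """Return the element of pool whose covariate vector is closest to item's."""
    return min(pool, key=lambda c: dist(cov(item), cov(c)))

def two_stage_match(treated_schools, candidate_schools,
                    treated_students, candidate_students,
                    school_cov, student_cov):
    # Stage 1: match each treated school to its most similar candidate school
    # on school-level aggregate covariates.
    matched_ids = {nearest(s, candidate_schools, school_cov)["id"]
                   for s in treated_schools}
    # Restrict the comparison pool to students in the matched schools,
    # shrinking the stage-2 search space from the whole state database.
    pool = [st for st in candidate_students if st["school"] in matched_ids]
    # Stage 2: match each treated student within that restricted pool.
    return [(st["id"], nearest(st, pool, student_cov)["id"])
            for st in treated_students]
```

Restricting stage 2 to the matched schools is what makes the computation tractable on a database with millions of records: the expensive student-level comparison runs only over the small pool selected in stage 1, rather than over every student in the state.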
