Abstract

The increasing availability of funds for the development, design, and evaluation of alternative learning environments has challenged educational researchers to develop and validate innovative and effective interventions. The focus on accountability has accelerated efforts to record events, activities, and participation in substantive ways that suggest significance, statistical and otherwise, and that warrant further program improvement and modification. Yet educational researchers remain generally ill-equipped when they rely on traditional individual standardized measures, which are designed specifically to discriminate among students and are better suited to controlled laboratory experiments than to the sporadic, often spontaneous interactions common to learning settings in and out of school. Even as alternative educational programs receive financial support, the sanctioned means by which researchers and program developers document the success of all educational programs have progressively narrowed, favoring traditional experimental designs that emphasize whether a program works rather than why it is successful. In this article, we used a multimethod, multilevel analysis to document the underlying dynamics of specific alternative learning contexts, identifying generalizable principles while allowing for local variation.
