Abstract

Evaluation of large-scale educational programs is problematic because of inherent bias in the assignment of treatment and comparison groups. As a result, the ANOVA design is inapplicable, and even ANCOVA designs can give rise to serious regression artifacts. Data from the Follow Through Program are used to illustrate this point: the samples were kindergarteners in the Responsive Education model and in best-match comparison classrooms, and the criterion variable was MRT readiness level at posttest. Lord's true-score ANCOVA proved more effective than conventional ANCOVA in correcting for initial group differences. These data were also used to illustrate the problem of non-uniform program implementation across sites and classrooms: an index of implementation level by classroom was used to predict outcome levels, and the potential of this approach as an adjunct to comparative analysis is discussed.
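The regression-artifact problem named in the abstract can be illustrated with a small simulation (all numbers below are hypothetical, not the Follow Through data). When the pretest covariate is measured with error, conventional ANCOVA under-adjusts for preexisting group differences and can manufacture a spurious treatment effect; a Lord-style true-score adjustment, which shrinks each observed score toward its own group mean by an assumed reliability, removes most of that artifact. This is a minimal sketch of the idea, not the paper's actual analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated data (NOT the Follow Through sample): two
# nonequivalent groups whose true pretest ability differs by 5 points,
# and a posttest driven entirely by true ability -- no program effect.
n = 2000
group = np.repeat([0, 1], n)                   # 0 = comparison, 1 = program
true_pre = rng.normal(50 + 5 * group, 8)       # true pretest ability
reliability = 0.7                              # assumed pretest reliability
err_sd = 8 * np.sqrt((1 - reliability) / reliability)
x = true_pre + rng.normal(0, err_sd, 2 * n)    # observed (fallible) pretest
y = 0.9 * true_pre + rng.normal(0, 4, 2 * n)   # posttest (true effect = 0)

def ancova_group_effect(cov, grp, out):
    """Least-squares fit of out ~ 1 + cov + grp; return the group coefficient."""
    X = np.column_stack([np.ones_like(cov), cov, grp])
    beta, *_ = np.linalg.lstsq(X, out, rcond=None)
    return beta[2]

# Conventional ANCOVA: measurement error attenuates the covariate slope,
# so the adjustment is incomplete and a spurious "program effect" remains.
naive = ancova_group_effect(x, group, y)

# Lord-style true-score ANCOVA: replace each observed score with its
# estimated true score -- shrink it toward its own group mean by the
# reliability -- then run the same ANCOVA on the adjusted covariate.
x_adj = x.copy()
for g in (0, 1):
    m = x[group == g].mean()
    x_adj[group == g] = m + reliability * (x[group == g] - m)
corrected = ancova_group_effect(x_adj, group, y)
```

Because the simulated program effect is zero, any nonzero group coefficient from the conventional fit is pure regression artifact; the true-score adjustment recovers the within-group slope on true ability and drives the estimated effect back toward zero.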
