Abstract

International Large-Scale Assessments (ILSAs) such as TIMSS and PISA provide comparative indicators and trend information on educational systems. Scholars have repeatedly claimed that ILSAs should be based on concepts from Educational Effectiveness Research (EER). At the same time, ILSAs can contribute to the further development of EER by providing data, triggering new research studies, and providing instruments that work across cultures. When using ILSA data, however, researchers need to cope with limitations regarding design, sampling, and measurement. Cross-sectional data from individual ILSAs, with little information on students’ learning paths, rarely allow for estimating the effects of policies and practices on student outcomes. Rather, ILSAs inform about the distribution of educational opportunities among students, schools, and regions. Effects of national policies may be identified through country-level trend data, if ecological fallacies can be avoided. In an attempt to illustrate methodological problems and discuss relationships between ILSAs and EER, the present chapter uses a specific showcase: policies and practices of educational assessment. Several related measures were implemented in PISA. Reanalyzing these data, the chapter identifies national patterns of classroom assessment practices, use of assessment, school evaluation, and accountability policies. For example, “soft accountability” (comparing performance with a national standard) is distinguished from “strong accountability” (making test results public); soft accountability was related to country-level growth in achievement. English-speaking countries turned out to show similar patterns, while full measurement invariance with regard to student-perceived assessment and feedback could be established for only four (non-American) countries.
