Abstract

High-stakes, test-based accountability systems rely primarily on aggregates and derivatives of scores from tests that were originally developed to measure individual student proficiency in subject areas such as math, reading/language arts, and now English language proficiency. Current validity models do not explicitly address this use of aggregate scores in accountability. Historically, language testing and educational measurement have been related yet parallel disciplines. Accountability policies have increasingly forced these disciplines under one umbrella, with a common system of rewards and sanctions based on the results achieved. Therefore, a validity framework such as the one suggested in the present paper is relevant to both.

Highlights

  • Historical and contemporary theories of validity and validation were designed with individual test scores in mind, but in accountability, these scores are aggregated to create a score or index at the school or teacher level

  • To address the gaps in prevalent validity models, the Interpretive Use Argument (IUA) is reconceptualized to offer a systematic approach for building a validity argument that begins in the test design and development phase, includes a parallel process for building validity evidence for aggregate scores, and considers the consequences of accountability systems

  • Under the Every Student Succeeds Act (ESSA), schools are accountable for the progress of English learners (ELs) in English language proficiency, math, and reading/language arts, as well as for other accountability indicators such as graduation rates


Summary

Introduction

Historical and contemporary theories of validity and validation were designed with individual test scores in mind, but in accountability, these scores are aggregated to create a score or index at the school or teacher level. Kane (2006, 2010, 2012, 2013, 2015, 2020) discusses test development, consequences, and accountability in his writing, but his Interpretive Use Argument (IUA) does not explicitly address the need for validity evidence in these areas. To address the gaps in prevalent validity models, the IUA is reconceptualized to offer a systematic approach for building a validity argument that begins in the test design and development phase, includes a parallel process for building validity evidence for aggregate scores, and considers the consequences of accountability systems. Indices synthesize data based on decision rules to provide a single score that is used to make judgments about educational quality and student success (Standards, 2014). It is the interpretation and use of these indices that must be validated through an argument built for the system as a whole (Kane, 2013). It is also important to consider the effects of external factors, such as educational opportunity, English learner status, race, and socioeconomic status, on aggregate score interpretations.
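
To make the aggregation step concrete, the minimal sketch below illustrates how decision rules might synthesize individual student results, including EL progress in English language proficiency, into a single school-level index. All cut scores, indicator names, and weights here are hypothetical illustrations for discussion, not rules drawn from the paper or from any state's accountability plan.

```python
# Hypothetical sketch: aggregating individual student scores into a
# single school-level accountability index. Cut scores, weights, and
# indicator names are illustrative assumptions only.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class StudentRecord:
    math_scale_score: float
    ela_scale_score: float
    el_progress_met: Optional[bool]  # None for students who are not English learners


# Illustrative decision rules: proficiency cut scores and indicator weights.
MATH_CUT = 2500.0
ELA_CUT = 2500.0
WEIGHTS = {"math": 0.4, "ela": 0.4, "el_progress": 0.2}


def school_index(students: List[StudentRecord]) -> float:
    """Combine individual scores into one school-level index (0-100)."""
    pct_math = 100 * sum(s.math_scale_score >= MATH_CUT for s in students) / len(students)
    pct_ela = 100 * sum(s.ela_scale_score >= ELA_CUT for s in students) / len(students)

    els = [s for s in students if s.el_progress_met is not None]
    pct_el = 100 * sum(s.el_progress_met for s in els) / len(els) if els else 0.0

    # Weighted composite: the decision rules, not the tests themselves,
    # determine the single number on which the school is judged.
    return (WEIGHTS["math"] * pct_math
            + WEIGHTS["ela"] * pct_ela
            + WEIGHTS["el_progress"] * pct_el)


if __name__ == "__main__":
    roster = [
        StudentRecord(2550, 2480, None),
        StudentRecord(2510, 2530, True),
        StudentRecord(2440, 2460, False),
    ]
    print(f"School index: {school_index(roster):.1f}")
```

The point of the sketch is that the index is a product of policy choices (cut scores, weights, which indicators count), so the validity argument must cover those decision rules and their consequences, not only the individual test scores that feed into them.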

