Abstract

This report is the fifth in a series concerning English language proficiency (ELP) assessments for English learners (ELs) in kindergarten through 12th grade in the United States. The series, produced by Educational Testing Service (ETS), is intended to provide theory‐ and evidence‐based principles and recommendations for improving next‐generation ELP assessment systems, policies, and practices and to stimulate discussion on better serving K–12 EL students. The first report articulated a high‐level conceptualization of next‐generation ELP assessment systems (Hauck, Wolf, & Mislevy, 2016). The second report addressed accessibility issues in the context of ELP assessments for ELs and ELs with disabilities (Guzman‐Orth, Laitusis, Thurlow, & Christensen, 2016). The third report focused on critical policy and research issues of summative ELP assessments that states use for accountability purposes (Wolf, Guzman‐Orth, & Hauck, 2016). The fourth report dealt with one of the major uses of ELP assessments: the initial identification and classification of ELs (Lopez, Pooler, & Linquanti, 2016). The present report discusses approaches to using automated scoring technology to evaluate students' spoken responses on K–12 ELP assessments. As many states have begun to use computer‐based ELP assessments, there is growing interest in automated scoring of spoken responses to increase the efficiency of scoring. This report delineates major areas to consider in using automated speech scoring for K–12 ELP assessments (i.e., assessment construct and task design, scoring and score reporting, and artificial intelligence (AI) model development and test delivery) and makes recommendations for how states can evaluate these considerations and determine a path forward.
