Abstract

As a branch of artificial intelligence, automated speech recognition (ASR) technology is increasingly used to detect speech, transcribe it to text, and derive the meaning of natural language for various learning and assessment purposes. ASR inaccuracy may pose serious threats to valid score interpretation and fair score use, particularly when it is exacerbated by test takers' characteristics, such as language background and accent, and by assessment task type. The present study investigated the extent to which the speech-to-text accuracy rates of three major ASR systems vary across oral task types and students' language background variables. Results indicate that task type and students' language background have statistically significant main and interaction effects on ASR accuracy. The paper discusses the implications of these results for applying ASR to computerized assessment design and automated scoring.
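
To make the measurement concrete, the Python sketch below illustrates one common way such accuracy rates could be computed and tested. The paper does not publish its analysis pipeline, so everything here is an assumption for illustration: word accuracy is taken as 1 minus word error rate (computed with the jiwer library), and the main and interaction effects are tested with a two-way ANOVA (via statsmodels) on invented toy data.

# Illustrative sketch, not the authors' pipeline: score each ASR
# transcript against a human reference transcript, then test for main
# and interaction effects of task type and language background.
import jiwer
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def word_accuracy(reference: str, hypothesis: str) -> float:
    """Word-level accuracy: 1 minus word error rate (WER)."""
    return 1.0 - jiwer.wer(reference, hypothesis)

# Invented toy data: one row per spoken response, two responses per
# task-by-background cell so the interaction model has residual df.
rows = [
    ("read_aloud", "L1_english", "the cat sat on the mat", "the cat sat on the mat"),
    ("read_aloud", "L1_english", "she sells sea shells",   "she sells seashells"),
    ("read_aloud", "L1_spanish", "the cat sat on the mat", "the cat sat on a mat"),
    ("read_aloud", "L1_spanish", "she sells sea shells",   "she sell sea shell"),
    ("open_ended", "L1_english", "i think it will rain",   "i think it will rain"),
    ("open_ended", "L1_english", "we went to the park",    "we went to a park"),
    ("open_ended", "L1_spanish", "i think it will rain",   "i think he will train"),
    ("open_ended", "L1_spanish", "we went to the park",    "when to the park"),
]
df = pd.DataFrame(rows, columns=["task", "background", "reference", "hypothesis"])
df["accuracy"] = [word_accuracy(r, h) for r, h in zip(df["reference"], df["hypothesis"])]

# Two-way ANOVA with an interaction term: accuracy ~ task * background.
model = smf.ols("accuracy ~ C(task) * C(background)", data=df).fit()
print(anova_lm(model, typ=2))

On real data one would also compare the three ASR systems and use far larger samples; the toy frame here only shows the shape of the computation.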
