Abstract

Purpose: This study develops a new system for automatically assessing learner speech elicited through an oral discourse completion task (DCT) and evaluates its predictive performance, with a view to better understanding the factors that influence predicted speaking proficiency scores and the pedagogical implications of the system.

Methodology: The system has a tripartite structure: an automatic speech recogniser, a set of modules that compute speech features, and a scoring model. A multi-turn oral DCT, a task that closely resembles discourse in real-life situations, was administered to 210 participants of intermediate English proficiency. The collected speech samples and their transcriptions were first reviewed and rated by human raters. Eighty percent of the dataset was then used to train the prediction model, which was evaluated against the remaining twenty percent.

Findings: Exact agreement between human and machine scores was 72%, a moderately high figure comparable to results reported in the automated speech scoring literature. This level of agreement could provide a basis for deploying the system in a low-stakes practice environment.

Originality/value: This study makes a unique contribution to the wider scholarship, in which single-turn DCTs remain prevalent, by presenting a new, reliable scoring system for learner speech based on an automated multi-turn DCT. It offers useful insight into how the system could be used in low-stakes environments, including foreign language classrooms.
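The evaluation protocol summarised above (an 80/20 train/test split followed by an exact-agreement check between machine and human scores) can be made concrete with a minimal sketch. This is purely illustrative: the random placeholder data, the twelve-feature representation, the choice of a random-forest regressor, and the 1-5 rating scale are all assumptions for demonstration, not details taken from the study.

```python
# Illustrative sketch of the evaluation protocol described in the abstract:
# train a scoring model on 80% of rated responses, then measure exact
# agreement with human scores on the held-out 20%. The regressor, feature
# set, and rating scale are hypothetical, not the study's actual choices.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder speech features per response (e.g. fluency, accuracy, ...)
# and human ratings on an assumed integer proficiency scale of 1-5.
X = rng.normal(size=(210, 12))       # 210 participants, 12 features
y = rng.integers(1, 6, size=210)     # human scores (placeholder data)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0  # the abstract's 80/20 split
)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Round continuous predictions back onto the human rating scale, then
# compute exact agreement: the share of held-out responses where the
# machine score matches the human score exactly.
pred = np.clip(np.rint(model.predict(X_test)), 1, 5).astype(int)
exact_agreement = np.mean(pred == y_test)
print(f"Exact agreement: {exact_agreement:.0%}")
```

Rounding continuous predictions onto the discrete human rating scale before comparison is one common way to compute exact agreement; classification-style models that predict score bands directly would be an equally plausible design.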
