Abstract

This paper compares the speaking scores generated by two online systems designed to automatically grade student speech and provide personalized speaking feedback in an EFL context. The first system, Speech Assessment for Moodle (SAM), is an open-source solution developed by the author. It uses Google’s speech recognition engine to transcribe speech into text, which is then scored automatically with a phoneme-based algorithm; SAM is implemented as a custom quiz type for Moodle, a widely adopted open-source course management system. The second system, EnglishCentral, is a popular proprietary language-learning solution that scores speech automatically with a trained intelligibility model. The results indicate a positive correlation between the scores generated by the two systems: students who scored higher on the SAM speaking tasks also tended to score higher on the EnglishCentral speaking tasks, and vice versa. In addition to comparing the two systems against each other, students’ computer-generated speaking scores were compared with human-generated scores from small-group face-to-face speaking tasks. Here, too, students who received higher scores on the online computer-graded tasks tended to score higher on the human-graded small-group tasks, and vice versa.
