Abstract
Speech recognition errors have been shown to negatively correlate with user satisfaction in evaluations of task-oriented spoken dialogue systems. In the domain of tutorial dialogue systems, however, where the primary evaluation metric is student learning, there has been little investigation of whether speech recognition errors also negatively correlate with learning. In this paper we examine correlations between student learning and automatic speech recognition performance, in a corpus of dialogues collected with an intelligent tutoring spoken dialogue system. We examine numerous quantitative measures of speech recognition error, including rejection versus misrecognition errors, word versus sentence-level errors, and transcription versus semantic errors. Our results show that although many of our students experience problems with speech recognition, none of our measures negatively correlates with student learning.
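The word-level transcription measure mentioned above is conventionally word error rate (WER), the word-level edit distance between the recognizer's hypothesis and the human transcript, normalized by transcript length; a sentence-level error simply flags any utterance whose hypothesis differs from its transcript. As a minimal sketch (the function name and dynamic-programming formulation are illustrative, not taken from the paper):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance divided by
    the number of words in the reference transcript."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / len(ref)


def sentence_error(reference: str, hypothesis: str) -> bool:
    """Sentence-level error: any mismatch counts the whole utterance as wrong."""
    return reference.split() != hypothesis.split()
```

For example, `wer("the force is equal", "the horse is eagle")` gives 0.5 (two substitutions over four reference words), while `sentence_error` marks the same pair as a single sentence-level error.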