Abstract

Automatic Speech Recognition (ASR) technology has become part of contemporary Computer-Assisted Language Learning (CALL) systems. ASR systems, however, are often criticized for their erroneous output, especially when used as a means of developing Second Language (L2) skills, where errors are not tolerated. Nevertheless, these errors can provide useful information and suggest further applications. In this study, we investigate the relationship between the underlying features that cause ASR errors and those that make L2 listening difficult. The research is motivated by the comparable nature of the difficulties that both ASR systems and L2 listeners encounter in recognizing speech. Our aim is to enhance the Partial and Synchronized Caption (PSC) system, which we previously developed to foster L2 listening skills. PSC presents only a selective set of words (those likely to cause listening difficulties) in order to encourage learners to listen to the audio and read only the problematic words. To improve PSC's word selection, we attempt to detect individual sentences and words that are difficult to recognize by referring to ASR errors. Our system compares these errors with PSC's choices to find overlaps and seek further enhancement. The results revealed a close relationship between ASR errors and the factors leading to L2 listening difficulties, and indicated that ASR errors can contribute to word selection in PSC.
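The comparison of ASR errors with PSC word choices can be pictured as a set-overlap analysis. The sketch below is a minimal, hypothetical illustration, assuming both sources are reduced to word sets; the word lists and variable names are invented for illustration and are not drawn from the study's data or system.

```python
# Hypothetical sketch: comparing ASR-error words with PSC-selected words.
# The word lists below are illustrative examples, not the study's data.

asr_error_words = {"temperature", "phenomenon", "colleague", "thorough"}
psc_selected_words = {"phenomenon", "thorough", "algorithm", "colleague"}

# Words flagged by both sources: evidence supporting PSC's existing selection.
overlap = asr_error_words & psc_selected_words

# Words the ASR misrecognized but PSC did not caption: candidates for
# extending PSC's word selection.
missed_by_psc = asr_error_words - psc_selected_words

# Fraction of ASR-error words already covered by PSC.
overlap_ratio = len(overlap) / len(asr_error_words)

print(sorted(overlap))       # ['colleague', 'phenomenon', 'thorough']
print(sorted(missed_by_psc)) # ['temperature']
print(overlap_ratio)         # 0.75
```

In practice, deriving `asr_error_words` would require aligning ASR hypotheses against reference transcripts (e.g., via an edit-distance alignment) rather than assuming a ready-made set, but the overlap logic stays the same.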
