Abstract

Purpose: Articulation errors substantially reduce speech intelligibility and the ease of spoken communication. Moreover, the articulation training that speech-language pathologists must provide is time-consuming and expensive. The purpose of this paper is to develop a computer-aided articulation learning system that facilitates the learning process for subjects with articulation disorders.

Design/methodology/approach: Facial animations, including lip and tongue animations, are used to convey the manner and place of articulation to the subject, which improves the effectiveness of articulation learning. An interactive learning system is implemented through pronunciation confusion networks (PCNs) and automatic speech recognition (ASR), which are applied to identify mispronunciations.

Findings: Speech and facial animations are effective in helping subjects imitate sounds and develop articulatory ability. PCNs and ASR can be used to automatically identify mispronunciations.

Research limitations/implications: Future research will evaluate the clinical performance of this approach to articulation learning.

Practical implications: The experimental results of this study indicate that clinically implementing a computer-aided articulation learning system for articulation training is feasible.

Originality/value: This study developed a computer-aided articulation learning system to help subjects with articulation disorders improve their speech production ability.
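As a rough illustration of the mispronunciation-detection idea described above, the sketch below compares an ASR phoneme hypothesis against a simplified, hand-built confusion-network structure. The `Slot` class, the confusion weights, the example phonemes, and the assumption of a one-to-one alignment between hypothesis and target are all illustrative assumptions, not the authors' PCN or ASR implementation.

```python
from dataclasses import dataclass, field

# Hypothetical phoneme-level confusion network: each slot of the target word
# holds the canonical phoneme plus phonemes it is commonly confused with.
# This is a simplified stand-in for a PCN, not the paper's actual model.
@dataclass
class Slot:
    canonical: str                                   # expected phoneme
    confusions: dict = field(default_factory=dict)   # confusable phoneme -> prior weight


def detect_mispronunciations(slots, hypothesis):
    """Compare an ASR phoneme hypothesis against the target, slot by slot.

    Assumes a one-to-one alignment (no insertions or deletions) for clarity.
    Returns a list of (position, expected, recognized, note) for every slot
    where the recognized phoneme deviates from the canonical one.
    """
    errors = []
    for i, (slot, phone) in enumerate(zip(slots, hypothesis)):
        if phone == slot.canonical:
            continue  # slot pronounced as expected
        if phone in slot.confusions:
            note = f"known confusion (weight {slot.confusions[phone]:.2f})"
        else:
            note = "unexpected substitution"
        errors.append((i, slot.canonical, phone, note))
    return errors


if __name__ == "__main__":
    # Target /s a n/ with typical substitutions listed per slot (illustrative only).
    target = [
        Slot("s", {"th": 0.4, "t": 0.2}),
        Slot("a", {}),
        Slot("n", {"ng": 0.3}),
    ]
    asr_hypothesis = ["th", "a", "n"]   # assumed output of a phoneme-level ASR decoder
    for pos, expected, got, note in detect_mispronunciations(target, asr_hypothesis):
        print(f"slot {pos}: expected /{expected}/, heard /{got}/ -> {note}")
```

In this toy example the system would flag the first slot as a likely /s/ -> /th/ substitution, which is the kind of per-phoneme feedback a PCN-plus-ASR pipeline could surface to the learner.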
