Abstract

Much progress has been achieved during the past two decades in audio-visual automatic speech recognition (AVASR). However, challenges persist that hinder AVASR deployment in practical situations, most notably the robust and fast extraction of visual speech features. We review our efforts to overcome this problem, based on an appearance-based visual feature representation of the speaker's mouth region. We cover three topics in particular. First, we discuss AVASR in realistic, visually challenging domains, where lighting, background, and head pose vary significantly. To enhance visual front-end robustness in such environments, we employ an improved statistically based face detection algorithm that significantly outperforms our baseline scheme. Nevertheless, visual-only recognition on such data remains inferior to recognition on visually clean (studio-like) data, thus demonstrating the importance of accurate mouth-region extraction. We then consider a wearable audio-visual sensor that captures the mouth region directly, thus eliminating the need for face detection. Its use improves visual-only recognition, even over full-face video recorded in the studio-like environment. Finally, we address the speed of visual feature extraction by discussing our real-time AVASR prototype implementation. The reported progress demonstrates the feasibility of practical AVASR.
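
The abstract does not give implementation details for the appearance-based front end. As a minimal illustrative sketch only, the code below shows one common appearance-based representation from the AVASR literature: a fixed-size grayscale mouth region-of-interest is compressed by a two-dimensional discrete cosine transform (DCT), and the lowest-frequency coefficients serve as the static visual feature vector. The function name, the 64x64 ROI size, and the 24-coefficient dimensionality are assumptions for illustration, not the authors' exact configuration.

```python
# Minimal sketch of an appearance-based visual front end (illustrative
# parameters, not the paper's exact configuration): crop a fixed-size
# mouth ROI, apply a 2D DCT, keep the low-frequency coefficients.
import numpy as np
from scipy.fftpack import dct


def mouth_dct_features(mouth_roi: np.ndarray, n_coeffs: int = 24) -> np.ndarray:
    """mouth_roi: grayscale mouth-region image, e.g. 64x64 pixels."""
    roi = mouth_roi.astype(np.float64)
    # Separable 2D DCT-II with orthonormal scaling (DCT along rows,
    # then along columns).
    coeffs = dct(dct(roi, axis=0, norm="ortho"), axis=1, norm="ortho")
    # Simplification of the usual zig-zag scan: take the top-left
    # (lowest-frequency) block and keep the first n_coeffs values.
    k = int(np.ceil(np.sqrt(n_coeffs)))
    return coeffs[:k, :k].flatten()[:n_coeffs]


# Example usage (hypothetical 64x64 grayscale mouth crop):
roi = np.random.rand(64, 64)
feat = mouth_dct_features(roi)  # -> 24-dimensional static feature vector
```

In a full pipeline, such static features would typically be augmented with temporal derivatives and fed to the audio-visual recognizer; the sketch covers only the per-frame transform.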
