Abstract

Visual features are attractive cues for robust automatic speech recognition (ASR). In particular, in acoustically unfavorable environments, recognition performance can be improved by combining audio with visual information obtained from the speaker's face rather than relying on audio alone. For this reason, audio-visual speech recognition (AVSR) models have recently been studied extensively. However, experimental results for these models show that the information most important for speech recognition is concentrated in the audio signal, while visual information mainly serves to improve robustness when the audio is corrupted by noise. Consequently, the recognition performance of conventional AVSR models in noisy environments can be improved only to a limited extent. Unlike conventional AVSR models that use the input audio-visual information as it is, in this paper we propose an AVSR model that first performs audio-visual speech enhancement (AVSE) to enhance the target speech based on audio-visual information and then recognizes speech from both the enhanced audio and visual information such as the speaker's lips or face. Specifically, we propose a deep AVSR model trained end to end as a single network by integrating a conformer-based AVSR model with hybrid decoding and an AVSE model based on a U-net with recurrent neural network (RNN) attention (RA). Experimental results on the LRS2-BBC and LRS3-TED datasets demonstrate that the AVSE model effectively suppresses corrupting noise and that the AVSR model achieves noise robustness. In particular, the proposed jointly trained model, which integrates the AVSE and AVSR stages into one model, showed better recognition performance than the other compared methods.
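To make the enhance-then-recognize pipeline concrete, the following is a minimal, hypothetical PyTorch sketch of a jointly trainable AVSE front-end feeding an AVSR back-end. The module names (SimpleAVSE, SimpleAVSR, JointAVSEAVSR), feature dimensions, and placeholder layers are illustrative assumptions, not the authors' implementation; the actual model uses a U-net with RNN attention for enhancement and a conformer with hybrid CTC/attention decoding for recognition.

import torch
import torch.nn as nn


class SimpleAVSE(nn.Module):
    """Placeholder audio-visual speech enhancement front-end.

    Predicts a time-frequency mask for the noisy spectrogram, conditioned on
    visual (lip) features; stands in for the U-net + RNN-attention model."""

    def __init__(self, freq_bins=80, vis_dim=256, hidden=256):
        super().__init__()
        self.audio_rnn = nn.GRU(freq_bins, hidden, batch_first=True, bidirectional=True)
        self.vis_proj = nn.Linear(vis_dim, 2 * hidden)
        self.mask_head = nn.Sequential(nn.Linear(2 * hidden, freq_bins), nn.Sigmoid())

    def forward(self, noisy_spec, vis_feat):
        # noisy_spec: (B, T, F); vis_feat: (B, T, vis_dim), upsampled to the audio frame rate
        a, _ = self.audio_rnn(noisy_spec)
        mask = self.mask_head(a + self.vis_proj(vis_feat))
        return mask * noisy_spec  # enhanced spectrogram


class SimpleAVSR(nn.Module):
    """Placeholder audio-visual recognizer with CTC and attention heads.

    Stands in for the conformer-based back-end with hybrid decoding."""

    def __init__(self, freq_bins=80, vis_dim=256, d_model=256, vocab=1000):
        super().__init__()
        self.fuse = nn.Linear(freq_bins + vis_dim, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.ctc_head = nn.Linear(d_model, vocab)
        self.att_head = nn.Linear(d_model, vocab)  # stand-in for an attention decoder

    def forward(self, enhanced_spec, vis_feat):
        h = self.encoder(self.fuse(torch.cat([enhanced_spec, vis_feat], dim=-1)))
        return self.ctc_head(h), self.att_head(h)


class JointAVSEAVSR(nn.Module):
    """End-to-end model: enhance first, then recognize from enhanced audio plus lips."""

    def __init__(self):
        super().__init__()
        self.avse = SimpleAVSE()
        self.avsr = SimpleAVSR()

    def forward(self, noisy_spec, vis_feat):
        enhanced = self.avse(noisy_spec, vis_feat)
        ctc_logits, att_logits = self.avsr(enhanced, vis_feat)
        return enhanced, ctc_logits, att_logits


# Example forward pass with random tensors (batch of 2, 100 frames):
model = JointAVSEAVSR()
enhanced, ctc_logits, att_logits = model(torch.randn(2, 100, 80), torch.randn(2, 100, 256))

In a joint training setup of this kind, the objective would typically combine an enhancement loss on the enhanced spectrogram with the hybrid CTC/attention recognition losses, so that gradients from the recognizer also shape the enhancement front-end; the specific loss weights used in the paper are not stated in the abstract.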
