Abstract

Video information has been widely introduced into speech enhancement because of its contribution at low signal-to-noise ratios (SNRs). Conventional audio-visual speech enhancement networks take noisy speech and video as input and learn the features of clean speech directly. To reduce the large SNR gap between the learning target and the input noisy speech, we propose a novel mask-based audio-visual progressive learning speech enhancement (AVPL) framework with visual information reconstruction (VIR) that increases the SNR gradually. Each stage of AVPL takes the concatenation of a pre-trained visual embedding and the representation from the previous stage as input, and predicts a mask from the intermediate representation of the current stage. To extract more visual information and counteract the distortion introduced across stages, the AVPL-VIR model reconstructs the visual embedding that is fed into each stage. Experiments on the TCD-TIMIT dataset show that the progressive learning method significantly outperforms direct learning for both audio-only and audio-visual models. Moreover, by reconstructing the visual information, the VIR module provides a more accurate and comprehensive representation of the data, which in turn improves the performance of both audio-visual direct learning (AVDL) and AVPL.
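To make the staged design concrete, the sketch below shows what one AVPL stage could look like. This is not the authors' implementation: it assumes PyTorch, magnitude-spectrogram audio features, a frame-aligned frozen visual embedding, and arbitrary layer sizes; all names (`AVPLStage`, `mask_head`, `vir_head`) are hypothetical.

```python
# Minimal sketch of one AVPL stage with a VIR branch (assumed architecture,
# not the paper's released code). Dimensions are illustrative placeholders.
import torch
import torch.nn as nn

class AVPLStage(nn.Module):
    """One progressive-learning stage: fuses the previous stage's audio
    representation with the visual embedding, predicts a mask, and
    (for AVPL-VIR) reconstructs the visual embedding as an auxiliary output."""
    def __init__(self, audio_dim=257, visual_dim=128, hidden_dim=512):
        super().__init__()
        self.fuse = nn.LSTM(audio_dim + visual_dim, hidden_dim,
                            num_layers=2, batch_first=True)
        self.mask_head = nn.Sequential(nn.Linear(hidden_dim, audio_dim),
                                       nn.Sigmoid())       # mask in [0, 1]
        self.vir_head = nn.Linear(hidden_dim, visual_dim)  # VIR branch

    def forward(self, prev_repr, visual_emb):
        # prev_repr:  (batch, frames, audio_dim)  -- output of previous stage
        # visual_emb: (batch, frames, visual_dim) -- frozen pre-trained embedding
        h, _ = self.fuse(torch.cat([prev_repr, visual_emb], dim=-1))
        mask = self.mask_head(h)
        enhanced = mask * prev_repr    # masking raises the SNR stage by stage
        visual_rec = self.vir_head(h)  # trained to match visual_emb
        return enhanced, visual_rec
```

Under this reading, each stage would be trained against an intermediate target whose SNR lies between the input and the clean speech, with an auxiliary reconstruction loss (e.g. the mean squared error between `visual_rec` and `visual_emb`) encouraging the stage to retain the visual information it consumes.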
