Abstract

Lipreading aims to decode speech content from a moving mouth. It is a challenging task because lip appearance variations and speech content are coupled in the subtle movements of the lip region. In the speaker-independent recognition scenario in particular, the training and testing data differ markedly in distribution owing to diverse speaker identities, so the learned model generalizes poorly at test time. We propose a Siamese decoupling lipreading network (SDLipNet) to address this problem. Specifically, we exploit an encoder–decoder framework to establish a collaborative representation of speaker identities and speech content, and utilize the identity-specific information to regularize the content feature space. The identity features are derived from a Siamese identity encoder trained with paired visual speech data from different speakers. In addition, we align the content representation with a prior Gaussian distribution by imposing a Kullback–Leibler divergence constraint between the two outputs of the Siamese content encoder. In this way, the learned content feature space is expected to be universal across the target speaker domain. Extensive experiments on two lipreading benchmarks demonstrate that the proposed SDLipNet achieves better performance on the speaker-independent recognition task than state-of-the-art lipreading methods.
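To make the decoupling idea concrete, the following is a minimal sketch of how the Siamese content branch and its KL constraint could be formulated. The encoder architecture, feature dimensions, and the exact KL formulation (between the two branch posteriors and toward a standard-normal prior) are assumptions for illustration only; the paper's abstract does not specify these details, and the actual SDLipNet implementation may differ.

```python
import torch
import torch.nn as nn

class ContentEncoder(nn.Module):
    """Hypothetical content encoder: maps pooled lip-region features to a
    diagonal-Gaussian posterior over content codes. Dimensions are assumed."""
    def __init__(self, in_dim=512, z_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU())
        self.mu = nn.Linear(512, z_dim)
        self.logvar = nn.Linear(512, z_dim)

    def forward(self, x):
        h = self.backbone(x)
        return self.mu(h), self.logvar(h)

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians
    var_q, var_p = logvar_q.exp(), logvar_p.exp()
    kl = 0.5 * (logvar_p - logvar_q + (var_q + (mu_q - mu_p).pow(2)) / var_p - 1.0)
    return kl.sum(dim=-1).mean()

def siamese_content_loss(encoder, clip_a, clip_b):
    """Both branches share weights (Siamese). The sketch penalizes the KL
    divergence between the two branch posteriors (same utterance, different
    speakers) and pulls each posterior toward a standard-normal prior,
    encouraging a speaker-agnostic content space."""
    mu_a, lv_a = encoder(clip_a)
    mu_b, lv_b = encoder(clip_b)
    zeros = torch.zeros_like(mu_a)
    cross_kl = kl_diag_gaussians(mu_a, lv_a, mu_b, lv_b)
    prior_kl = (kl_diag_gaussians(mu_a, lv_a, zeros, zeros)
                + kl_diag_gaussians(mu_b, lv_b, zeros, zeros))
    return cross_kl + prior_kl

if __name__ == "__main__":
    enc = ContentEncoder()
    a, b = torch.randn(8, 512), torch.randn(8, 512)  # stand-in visual features
    print(siamese_content_loss(enc, a, b).item())
```

In a full system, this term would be combined with the recognition loss and the identity-regularization term described above; the weighting between them is not given in the abstract.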
