Abstract

Lipreading refers to recognizing a speaker's speech content from an image sequence of lip movements, without using the audio signal. Currently, most models combine a spatiotemporal (3D) convolutional layer with a 2D CNN to extract spatial and temporal features from image sequences. However, compared with 2D convolutional layers, which extract fine-grained features in the spatial domain, the single 3D convolutional layer used in these models cannot extract temporal information well. This paper improves on that point. Firstly, the Temporal Shift Module (TSM) is applied to two different front-ends (a fully 2D CNN, and a mixture of 2D and 3D convolutions) to enhance temporal information extraction. Secondly, the influence of different TSM shift proportions and of inputs with different sampling intervals on temporal information extraction is verified. Thirdly, the influence of different temporal shifts on spatiotemporal feature extraction is compared. The proposed method was verified on two challenging word-level lipreading datasets, LRW and LRW-1000, and achieved new state-of-the-art performance.
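The TSM mentioned above works by shifting a small fraction of the feature channels along the temporal dimension, so that each frame's features mix in information from neighboring frames at zero extra computational cost. The following is a minimal NumPy sketch of the idea (not the paper's implementation; the `shift_fraction` default and zero-padding at the sequence boundaries are assumptions):

```python
import numpy as np

def temporal_shift(x, shift_fraction=0.125):
    """Sketch of a Temporal Shift Module (TSM).

    x: array of shape (T, C) -- T frames, C feature channels per frame.
    The first `fold` channels are shifted backward in time (frame t
    receives frame t+1's values), the next `fold` channels forward
    (frame t receives frame t-1's values), and the rest are untouched.
    Vacated boundary positions are zero-filled.
    """
    T, C = x.shape
    fold = int(C * shift_fraction)
    out = np.zeros_like(x)
    out[:-1, :fold] = x[1:, :fold]               # shift toward the past
    out[1:, fold:2 * fold] = x[:-1, fold:2 * fold]  # shift toward the future
    out[:, 2 * fold:] = x[:, 2 * fold:]          # unshifted channels
    return out
```

In a full model such a shift would typically be inserted before a 2D convolution in each residual block, letting an otherwise purely spatial front-end exchange information across time.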
