Abstract

Lip reading is a widely used technology that aims to infer text content from visual information. To represent lip information more efficiently and reduce the number of network parameters, most networks first extract features from lip images and then classify those features. In recent studies, most researchers adopt convolutional networks to extract information directly from pixels, which contain a large amount of useless information, limiting improvements in model accuracy. In this paper, we design a graph structure and a lip segmentation network to effectively represent changes in lip shape across adjacent frames and the region of interest (ROI) within each frame, and we propose two feature extractors: a U-net-based local feature extractor and a graph-based adjacent feature extractor. We also propose a very challenging dataset that simulates extreme environments with highly variable face properties, light intensity, and other conditions. Finally, we design feature fusion methods at several different levels. Experimental results on the proposed challenging dataset show that the model effectively extracts useful information while discarding content-irrelevant information. The accuracy of our proposed model is 9.1% higher than that of the baseline, showing that it can better adapt to applications in wild environments.
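The abstract mentions fusing the outputs of the two extractors at several different levels. A minimal sketch of two common levels of fusion, feature-level (concatenating feature vectors) and decision-level (averaging per-class scores), is shown below; all function and variable names are illustrative assumptions, not the paper's actual API, and the paper's specific fusion designs may differ.

```python
# Illustrative sketch only: two generic fusion levels, not the paper's exact method.

def feature_level_fusion(local_feat, adjacent_feat):
    """Feature-level fusion: concatenate the feature vectors produced by the
    (hypothetical) U-net-based local extractor and graph-based adjacent extractor,
    so a single classifier can be trained on the combined representation."""
    return local_feat + adjacent_feat  # list concatenation

def decision_level_fusion(scores_a, scores_b):
    """Decision-level fusion: average the per-class scores produced by two
    separate classifier heads, one per extractor."""
    return [(a + b) / 2 for a, b in zip(scores_a, scores_b)]

# Example: two 2-dimensional feature vectors and two 2-class score vectors.
fused_feat = feature_level_fusion([0.1, 0.9], [0.5, 0.3])      # 4-dim vector
fused_scores = decision_level_fusion([0.2, 0.8], [0.4, 0.6])   # averaged scores
```

Feature-level fusion lets the classifier learn cross-stream interactions, while decision-level fusion keeps the two streams independent until the final prediction; evaluating both is a standard way to compare fusion depths.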
