Abstract

Lip movement recognition, the task of recognizing speech uttered by an individual from visual cues alone, has become a challenging problem with crucial applications in contemporary scenarios. Visual interpretation of lip movements is especially useful in settings such as video surveillance, where auditory signals are either unavailable or too noisy to interpret. It is also useful for hearing-impaired individuals, for whom the audio signal is of no use. Many developments have taken place in this nascent field using various deep learning-based techniques. This research analyzes various state-of-the-art deep learning models on the MIRACL-VC1 dataset and aims to identify the optimal baseline architecture for building a new high-accuracy model for lip movement detection. The models are trained from scratch on the pre-processed MIRACL-VC1 dataset, which consists of small-size images. Experimental observations with state-of-the-art deep learning models indicate that the EfficientNet B0 architecture yielded an accuracy of 80.13%; it is therefore utilized as the baseline deep architecture for designing a customized model for effective detection. The proposed model combines an attention mechanism with a Long Short-Term Memory (LSTM) layer on top of an EfficientNet B0 backbone and yielded an accuracy of 91.13%.
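
For concreteness, below is a minimal PyTorch sketch of the architecture described above: per-frame features from an EfficientNet B0 backbone, an LSTM over the frame sequence, and attention pooling before classification. The hidden size, number of classes, input resolution, and the simple additive attention used here are illustrative assumptions, not the authors' exact implementation.

import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

class LipReadingModel(nn.Module):
    """EfficientNet B0 backbone -> LSTM -> attention pooling -> classifier.

    Hyperparameters (hidden size, number of classes) are illustrative.
    """
    def __init__(self, num_classes=10, hidden_size=256):
        super().__init__()
        backbone = efficientnet_b0(weights=None)  # trained from scratch, per the paper
        # Drop the classification head; keep the 1280-d per-frame feature extractor.
        self.features = nn.Sequential(backbone.features, backbone.avgpool, nn.Flatten())
        self.lstm = nn.LSTM(1280, hidden_size, batch_first=True)
        # Simple additive attention over LSTM outputs (an assumption; the abstract
        # does not specify the exact attention mechanism).
        self.attn = nn.Linear(hidden_size, 1)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        # x: (batch, time, channels, height, width) -- a sequence of lip frames
        b, t = x.shape[:2]
        feats = self.features(x.flatten(0, 1)).view(b, t, -1)  # per-frame features
        out, _ = self.lstm(feats)                               # (b, t, hidden)
        weights = torch.softmax(self.attn(out), dim=1)          # (b, t, 1)
        context = (weights * out).sum(dim=1)                    # attention-pooled summary
        return self.classifier(context)

# Example: a batch of 2 clips, 16 frames each, 64x64 RGB crops (sizes are assumed).
model = LipReadingModel(num_classes=10)
logits = model(torch.randn(2, 16, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 10])

The attention layer here scores each LSTM time step and forms a weighted sum, so frames carrying the most discriminative lip motion dominate the clip-level representation fed to the classifier.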
