Abstract

Novel Algorithm Utilizing Deep Learning for Enhanced Arabic Lip Reading Recognition

Highlights

  • Computerized lip reading is the viseme-level understanding of lip movements and their conversion to written text, for both the hearing impaired and other speakers in noisy environments [1]

  • The reported accuracies are attained after pre-training, through fine-tuning of the deep CNN, which is unrolled into its deep neural network (DNN) architecture; a decoder then maps the outputs to the appropriate classes

  • This research proposes an approach with two phases: viseme feature extraction followed by deep CNN classification


Summary

INTRODUCTION

Computerized lip reading is the viseme-level understanding of lip movements and their conversion to written text, for both the hearing impaired and other speakers in noisy environments [1]. Real-time viseme recognition systems are crucial in several paradigms, such as surveillance, hearing aids for video conferencing, and video games with lip-reading interfaces, especially in noisy environments. Most of these applications expect a specific response time, usually real time. Real-time viseme recognition is driven by recognizing lip movements with video-processing techniques. Skin-color segmentation can differentiate the lip area from other areas in the video frames; it can be applied to determine lip-like pixels in an image, which can be cast as a binary classification problem.
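The binary lip-pixel classification described above can be sketched with a simple color rule. The snippet below is a minimal illustration, not the paper's method: it assumes a pseudo-hue feature r/(r+g), which tends to be higher for reddish lip pixels than for surrounding skin, and the threshold value is a hypothetical choice for demonstration.

```python
import numpy as np

def lip_like_mask(frame, hue_threshold=0.55):
    """Label each pixel of an RGB frame as lip-like (True) or not (False).

    Uses a pseudo-hue rule r / (r + g): lip pixels are typically redder
    than skin. The threshold is an illustrative assumption, not a value
    tuned in the paper.
    """
    frame = frame.astype(np.float64)
    r, g = frame[..., 0], frame[..., 1]
    pseudo_hue = r / (r + g + 1e-8)  # epsilon avoids division by zero
    return pseudo_hue > hue_threshold  # boolean mask = binary classification

# Tiny synthetic frame: one reddish "lip" pixel, one neutral "skin" pixel.
frame = np.array([[[200, 80, 90], [180, 160, 150]]], dtype=np.uint8)
mask = lip_like_mask(frame)
```

In practice such a mask would be computed per video frame and post-processed (e.g. with morphological operations) before feature extraction; this sketch only shows the per-pixel decision.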

BACKGROUND
Viseme Feature Extraction
Deep CNN Network Classifier
Dataset
Base Lip Reading Recognition Model
Findings
DISCUSSION
CONCLUSION

