Abstract

This work proposes a mouth-gesture recognition framework for Human-Computer Interaction (HCI). It replaces traditional input devices such as the mouse and keyboard, allowing a user to operate a computer through mouth gestures alone, and is aimed at helping severely disabled and paralyzed people. The pipeline comprises mouth detection, region extraction, gesture classification, and interfacing with computer applications. First, the face and mouth regions are detected using a Haar cascade classifier. Second, gesture recognition is performed with a deep learning approach based on a Convolutional Neural Network (CNN), which classifies each mouth gesture as mouth close, mouth open, tongue left, or tongue right. Finally, an HCI is created by mapping the recognized gestures to VLC media player operations: play, pause, forward jump, and backward jump. The performance of the proposed method is measured and compared against existing methods, and it is found to perform better.
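The sketch below illustrates the pipeline the abstract describes: Haar cascade detection, CNN classification into the four gesture classes, and a mapping from gestures to VLC operations. It is a minimal illustration, not the paper's implementation: the CNN layer sizes are hypothetical (the abstract gives no architecture details), the lower-third face crop stands in for the paper's dedicated mouth detector, and pyautogui-driven VLC default hotkeys stand in for however the paper interfaces with the player.

```python
import cv2
import numpy as np
import pyautogui
from tensorflow.keras import layers, models

GESTURES = ["mouth_close", "mouth_open", "tongue_left", "tongue_right"]

def build_classifier(input_shape=(64, 64, 1)):
    """Small CNN with one softmax output per mouth gesture.
    Layer sizes are illustrative; the paper does not specify them."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(len(GESTURES), activation="softmax"),
    ])

def extract_mouth(frame, face_cascade):
    """Detect the largest face and crop its lower third as the mouth
    region (a common heuristic; the paper uses a Haar mouth cascade)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    mouth = gray[y + 2 * h // 3 : y + h, x : x + w]
    mouth = cv2.resize(mouth, (64, 64)).astype("float32") / 255.0
    return mouth[..., np.newaxis]

# VLC default hotkeys as a stand-in for the paper's interface layer.
# Space toggles play/pause, so both mouth states share it here.
ACTIONS = {
    "mouth_open":   lambda: pyautogui.press("space"),            # play
    "mouth_close":  lambda: pyautogui.press("space"),            # pause
    "tongue_right": lambda: pyautogui.hotkey("shift", "right"),  # forward jump
    "tongue_left":  lambda: pyautogui.hotkey("shift", "left"),   # backward jump
}

if __name__ == "__main__":
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    model = build_classifier()  # in practice, load trained weights here
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    if ok:
        mouth = extract_mouth(frame, cascade)
        if mouth is not None:
            probs = model.predict(mouth[np.newaxis], verbose=0)[0]
            gesture = GESTURES[int(np.argmax(probs))]
            ACTIONS[gesture]()
    cap.release()
```

In a real deployment this loop would run per frame with some debouncing, since firing a hotkey on every frame would spam the player; the single-frame read above just keeps the sketch short.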
