Abstract

In computer vision, accurate recognition and classification of hand gestures is vital to building gesture recognition systems for human–computer interaction (HCI). Error-free HCI can considerably improve systems that recognize diverse classes of hand gesture characters. In this manuscript, we propose a modified deep learning architecture based on AlexNet for recognizing American Sign Language (ASL) gestures. The objective of static gesture recognition is to assign the data given by a hand gesture to one of a set of predefined gesture classes. The ASL dataset used in our methodology comprises samples from five different individuals. In contrast to most prior work, where the data is split randomly into training and testing sets, we train the network on the samples of four individuals and test on the fifth individual. We take the original AlexNet architecture and modify its final fully connected layers to classify the ASL dataset. The modified network is trained on the ASL dataset and compared with the original AlexNet architecture to demonstrate the competence of the work. The paper also reports the training and testing performance of the modified network on the ASL dataset.
