Abstract

Sign language serves as a bridge for communication between people with normal hearing and people with impaired hearing. However, only a few people with normal hearing understand sign language. Thus, Sign Language Recognition (SLR) has attracted significant interest from both industry and academia. Prior traditional methods have limitations such as sensitivity to lighting conditions and the need for extra hardware. To address these issues, this study aims to recognize sign language using acoustic signals on a smartphone. We design an SLR system called SignGest, which captures users' sign language gestures with built-in microphones. We then build a Convolutional Neural Network (CNN) model to extract features of different gestures and distinguish between them. Furthermore, we use a Deep Convolutional Generative Adversarial Network (DCGAN) to generate abundant training data that are indistinguishable from real samples. We address various challenges, including effective gesture segmentation and training data collection. Finally, SignGest is implemented on a server and an Android smartphone using only the built-in microphones and speakers, without any extra hardware or infrastructure. When users perform sign language gestures in front of the smartphone, SignGest recognizes each gesture and displays the result. Extensive experiments show that SignGest achieves robust and satisfactory performance.
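
The abstract does not give implementation details of the recognition stage, but as a rough illustration, the following is a minimal sketch of a CNN that classifies time-frequency representations (e.g., spectrograms) of acoustic gesture recordings, written in PyTorch. The architecture, the input shape (64 frequency bins by 128 frames), and the class count are assumptions for illustration, not the authors' model.

    # Illustrative sketch (not from the paper): a small CNN that classifies
    # spectrograms of acoustic gesture recordings into sign classes.
    # Layer sizes, input shape, and num_classes are assumptions.
    import torch
    import torch.nn as nn

    class GestureCNN(nn.Module):
        def __init__(self, num_classes: int = 10):
            super().__init__()
            # Two conv blocks extract time-frequency features from the
            # 1-channel spectrogram; pooling reduces resolution.
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),
            )
            # Global average pooling keeps the classifier head independent
            # of the exact spectrogram size.
            self.pool = nn.AdaptiveAvgPool2d(1)
            self.classifier = nn.Linear(32, num_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x = self.features(x)          # (B, 32, H/4, W/4)
            x = self.pool(x).flatten(1)   # (B, 32)
            return self.classifier(x)     # per-class logits

    # Usage: a batch of 8 spectrograms, 64 bins x 128 frames.
    model = GestureCNN(num_classes=10)
    logits = model(torch.randn(8, 1, 64, 128))
    print(logits.shape)  # torch.Size([8, 10])

A lightweight model of this kind can run on-device or on the paper's server component; the DCGAN-based data augmentation the abstract mentions would supply additional synthetic training samples for such a classifier.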
