Abstract

Sign language is a means of communication between the deaf community and hearing people that uses hand gestures, facial expressions, and body language. It has the same level of complexity as spoken language, but it does not follow the sentence structure of English. Sign-language motions comprise a range of distinct hand and finger articulations that are occasionally synchronized with movements of the head, face, and body. Existing sign language recognition systems are mainly camera-based and suffer from fundamental limitations: poor performance in low lighting, training challenges with long video sequences, and serious privacy concerns. This study presents a first-of-its-kind, contactless and privacy-preserving British Sign Language (BSL) recognition system using radar and deep learning algorithms. Six of the most common emotion signs are considered in this proof-of-concept study: confused, depressed, happy, hate, lonely, and sad. The collected radar data are represented as spectrograms. Three state-of-the-art deep learning models, InceptionV3, VGG19, and VGG16, then extract spatiotemporal features from the spectrograms. Finally, BSL emotions are identified by classifying the spectrograms into the considered emotion signs. Comparative simulation results demonstrate that a maximum classification accuracy of 93.33% across all classes is obtained with the VGG16 model.
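As a rough illustration of the classification stage described above (not the authors' published implementation), the sketch below shows how a pretrained VGG16 backbone could be adapted in Keras to classify radar spectrogram images into the six emotion signs. The input resolution, classifier head, and training hyperparameters are assumptions made for the example.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 6  # confused, depressed, happy, hate, lonely, sad

# VGG16 pretrained on ImageNet, reused as a feature extractor for
# spectrogram images (assumed here to be resized to 224x224 RGB).
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze convolutional features for transfer learning

# Small classification head mapping VGG16 features to the six emotion signs.
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Training would then proceed on labelled spectrograms, e.g.:
# model.fit(train_spectrograms, train_labels, validation_split=0.2, epochs=20)
```

Freezing the convolutional backbone and training only a compact head is a common transfer-learning choice when the spectrogram dataset is small; fine-tuning the upper VGG16 blocks afterwards is a natural extension.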
