Abstract

Individuals with hearing impairment typically face difficulties in communicating with hearing individuals and in acquiring reading and writing skills. Widely adopted by the deaf community, Sign Language (SL) has a grammatical structure in which facial expressions assume grammatical and affective functions, differentiate lexical items, participate in syntactic construction, and contribute to intensification processes. Automatic Sign Language Recognition (ASLR) technology supports communication between deaf and hearing individuals by translating sign language gestures into written or spoken sentences of a target language. Recognizing facial expressions can improve ASLR accuracy; in some cases, the absence of a facial expression leads to an incorrect translation, making facial expressions essential to the understanding of sign language. This paper presents an approach to facial expression recognition for sign language, using Brazilian Sign Language (Libras) as a case study. In our approach, we code Libras facial expressions using the Facial Action Coding System (FACS). We evaluate two convolutional neural networks, a standard CNN and a hybrid CNN+LSTM, for Action Unit (AU) recognition, and test the models on a challenging real-world video dataset of facial expressions in Libras. The results, an average F1-score of 0.87, indicate the potential of the system to recognize Libras facial expressions.
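To illustrate the kind of hybrid CNN+LSTM model the abstract refers to, the sketch below shows a minimal multi-label AU classifier over short video clips. This is not the authors' exact architecture; the clip length, frame resolution, number of AUs, and layer sizes are assumptions chosen for demonstration, and Keras/TensorFlow is used only as an example framework.

```python
# Illustrative sketch (not the paper's exact model): a hybrid CNN+LSTM for
# multi-label facial Action Unit (AU) recognition from short video clips.
# All shapes and hyperparameters below are assumptions for demonstration.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_FRAMES = 16        # assumed clip length (frames per sample)
FRAME_SIZE = (96, 96)  # assumed frame resolution
NUM_AUS = 12           # assumed number of FACS Action Units to detect

# Per-frame CNN feature extractor, applied to every frame via TimeDistributed.
frame_cnn = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(*FRAME_SIZE, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),
])

model = models.Sequential([
    layers.TimeDistributed(frame_cnn, input_shape=(NUM_FRAMES, *FRAME_SIZE, 3)),
    layers.LSTM(128),                              # temporal modelling across frames
    layers.Dense(NUM_AUS, activation="sigmoid"),   # one sigmoid per AU (multi-label)
])

# Multi-label targets: binary cross-entropy, one independent label per AU.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```

In this kind of setup, each AU is treated as an independent binary label, so a sigmoid output with binary cross-entropy is used rather than a softmax over classes; the standard CNN baseline would be the same idea applied to single frames without the LSTM.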
