Abstract

Facial expressions are fundamental in Sign Languages (SLs), visuospatial linguistic systems structured on gestures and used by deaf people around the world to communicate. Deaf individuals frequently need a sign language interpreter to access school and public services, and the absence of interpreters in such settings typically results in discouraging experiences. Advances in Automatic Sign Language Recognition (ASLR) can enable new assistive technologies and change how deaf people interact with the world. One major barrier to improving ASLR is the difficulty of obtaining well-annotated data. We present a newly developed video database of Brazilian Sign Language facial expressions recorded from a diverse group of deaf and hearing young adults. Well-validated sentence stimuli were used to elicit affective and grammatical facial expressions. Frame-level ground truth for facial actions was manually annotated using the Facial Action Coding System (FACS). The work also promotes the exploration of discriminant features in subtle facial expressions in sign language, a better understanding of the relation between grammatical facial expression classes and the dynamics of facial action units, and a deeper understanding of facial action occurrence. To provide a baseline for future research, protocols and benchmarks for automated action unit recognition are reported.
