Abstract

We present a global overview of image- and video-processing-based methods that support communication for hearing-impaired people. Two directions of communication have to be considered: from a hearing person to a hearing-impaired person, and vice versa. In this paper, we first describe sign language (SL) and cued speech (CS), two different languages used by the deaf community. Second, we present existing tools that employ SL and CS video processing and recognition for automatic communication from deaf people to hearing people. Third, we present existing tools for the reverse direction, from hearing people to deaf people, which involve SL and CS video synthesis.

Highlights

  • This section gives a short description of the two communication languages used by hard-of-hearing people

  • In spoken languages, words are produced through the vocal tract and perceived as sounds; in sign languages, signs are produced, alone or simultaneously, using hand shapes, hand motion, hand location, facial expression, head motion, and body posture, and are perceived visually

  • The comprehension of entire paragraphs is still disappointing; the prosodic cues that help the interlocutor chunk the stream of gestural patterns into meaningful units are essential to comprehension


Introduction

This section gives a short description of the two communication languages used by hard-of-hearing people. In spoken languages, words are produced through the vocal tract and perceived as sounds; in sign languages, signs are produced, alone or simultaneously, using hand shapes, hand motion, hand location, facial expression, head motion, and body posture, and are perceived visually. CS makes the oral language accessible to the hearing impaired by replacing the invisible articulators that participate in the production of sound (vocal cords, tongue, and jaw) with hand gestures, while keeping the visible articulators (lips). It complements lip-reading with various manual gestures, so that phonemes with similar lip shapes can be differentiated. In CS, information is shared between two modalities: the lip modality (related to lip shape and motion) and the hand modality (related to hand configuration and hand position with respect to the face).

