Abstract

Deaf and hard-of-hearing persons use sign language to converse with one another and with others in their community. Although innovative and accessible technology is evolving to assist persons with hearing impairments, much work remains to be done. Computer vision applications combined with machine learning techniques could benefit such persons further by enabling them to communicate more effectively. That is precisely what this chapter attempts to do. The authors propose a MobileConvNet model that recognises hand gestures in American Sign Language. MobileConvNet is a streamlined architecture that constructs lightweight deep convolutional neural networks using depthwise separable convolutions, providing an efficient model for mobile and embedded vision applications. The difficulties and limitations of sign language recognition are also discussed. Overall, the chapter is intended to give readers a thorough overview of sign language recognition and to aid future research in this area.
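The abstract's key architectural idea is the depthwise separable convolution, which factors a standard convolution into a per-channel spatial filter followed by a 1x1 channel-mixing filter. The abstract does not give MobileConvNet's exact configuration, so the following is a minimal PyTorch sketch of one such block in the MobileNet style; the class name, channel sizes, and input resolution are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """One depthwise separable convolution block (illustrative sketch).

    A 3x3 depthwise convolution filters each input channel independently,
    then a 1x1 pointwise convolution mixes information across channels.
    Separating these two steps is what makes MobileNet-style models
    lightweight compared to standard convolutions.
    """

    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        # Depthwise: groups=in_channels applies one filter per input channel.
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   stride=stride, padding=1,
                                   groups=in_channels, bias=False)
        self.bn1 = nn.BatchNorm2d(in_channels)
        # Pointwise: a 1x1 convolution combines the per-channel outputs.
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1,
                                   bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.bn1(self.depthwise(x)))
        return self.relu(self.bn2(self.pointwise(x)))


# Example: one block applied to a 224x224 RGB hand-gesture image
# (batch size 1); the output spatial size and channel count follow
# from the assumed stride and out_channels above.
block = DepthwiseSeparableConv(in_channels=3, out_channels=32)
out = block(torch.randn(1, 3, 224, 224))
print(out.shape)  # torch.Size([1, 32, 224, 224])
```

The efficiency gain comes from the factorization: for a 3x3 kernel, the depthwise-plus-pointwise pair uses roughly 8 to 9 times fewer multiply-accumulates than a standard convolution with the same input and output channels, which is why such blocks suit mobile and embedded vision applications.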
