Abstract

Sign language is a visual language used by people with hearing impairments that enables them to communicate. Because sign language is not widely known, developing a human-computer interaction system that gives people with hearing impairments a platform to communicate with others has great social significance. At present, most research on sign language recognition is based on traditional, non-end-to-end machine learning pipelines, which require extensive manual design work and generalize poorly. This paper proposes a system that uses a Residual Neural Network to perform end-to-end recognition of American Sign Language (ASL). The sign language dataset in this work consists of 36 classes of static sign language words, comprising the digits 0-9 and the English letters A-Z. Through data augmentation, we obtained 17,640 sign language images as model training data. The proposed method achieves 99.4% accuracy.
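
For illustration, the sketch below shows one way an end-to-end residual-network classifier for 36 static sign classes could be set up. The backbone depth (ResNet-18), input size, augmentation choices, dataset path, and training hyperparameters are assumptions for the sketch; the abstract does not specify the paper's actual configuration.

```python
# Minimal sketch of a ResNet-based 36-class static sign classifier.
# Assumptions (not from the abstract): ResNet-18 backbone, 224x224 inputs,
# an ImageFolder-style dataset layout, and these augmentation settings.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms
from torch.utils.data import DataLoader

NUM_CLASSES = 36  # digits 0-9 plus letters A-Z (static signs)

# Simple data augmentation; the paper's exact enhancement methods are not given here.
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomRotation(10),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

train_ds = datasets.ImageFolder("data/asl/train", transform=train_tf)  # hypothetical path
train_dl = DataLoader(train_ds, batch_size=64, shuffle=True)

# Residual network backbone with a 36-way classification head.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training epoch: end-to-end, raw images in, class scores out.
model.train()
for images, labels in train_dl:
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```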
