Abstract

Sign language recognition is of great significance for connecting the hearing/speech-impaired and non-sign-language communities. Compared to isolated word recognition, sentence recognition is more practical in real-world scenarios, but it is also more complicated because continuous, high-quality sign data with distinct features must be collected and isolated signs must be identified with high accuracy. Here, we propose a wearable sign language recognition system enabled by a convolutional neural network (CNN) that integrates stretchable strain sensors and inertial measurement units attached to the body to perceive hand postures and movement trajectories. Forty-eight Chinese sign language words commonly used in daily life were collected and used to train the CNN model, and an isolated sign language word recognition accuracy of 95.85% was achieved. For sentence-level sign language recognition, we propose a method that combines multiple sliding windows and uses correlation analysis to improve the CNN recognition performance, achieving a correct recognition rate of 84% on 50 sign language sentence samples and demonstrating good extensibility.
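
The abstract only outlines the pipeline, so the following is a minimal illustrative sketch, not the authors' implementation, of how such a system could be wired up in PyTorch: a small 1D CNN classifies fixed-length windows of multichannel sensor data (strain channels plus IMU channels), and a continuous sentence signal is scanned with several sliding-window lengths whose per-window predictions are merged. All names and hyperparameters here (SignCNN, N_CHANNELS, WIN_LEN, the window lengths, stride, and confidence threshold) are assumptions, and the paper's correlation-analysis step, whose details the abstract does not give, is approximated by simple confidence-based merging of overlapping detections.

```python
# Illustrative sketch only: layer sizes, channel counts, and window
# parameters are assumptions, not values from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_CHANNELS = 11   # assumed: e.g. 5 strain-sensor channels + 6 IMU channels
N_CLASSES = 48    # the 48 isolated sign words from the abstract
WIN_LEN = 128     # assumed fixed CNN input length (samples)

class SignCNN(nn.Module):
    """Small 1D CNN over (channels, time) sensor windows."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv1d(N_CHANNELS, 32, kernel_size=5, padding=2)
        self.conv2 = nn.Conv1d(32, 64, kernel_size=5, padding=2)
        self.pool = nn.MaxPool1d(2)
        self.fc = nn.Linear(64 * (WIN_LEN // 4), N_CLASSES)

    def forward(self, x):                  # x: (batch, N_CHANNELS, WIN_LEN)
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        return self.fc(x.flatten(1))       # logits over the 48 words

def sentence_decode(model, signal, win_lens=(96, 128, 160), stride=16,
                    conf_thresh=0.8):
    """Scan a continuous sentence signal (N_CHANNELS, T) with several
    sliding-window lengths; keep confident, non-overlapping detections.
    NOTE: this merging rule stands in for the paper's correlation analysis."""
    model.eval()
    detections = []                        # (start_sample, word_id, confidence)
    T = signal.shape[1]
    with torch.no_grad():
        for wl in win_lens:
            for start in range(0, T - wl + 1, stride):
                win = signal[:, start:start + wl]
                # resample every window to the CNN's fixed input length
                win = F.interpolate(win[None], size=WIN_LEN,
                                    mode="linear", align_corners=False)
                probs = F.softmax(model(win), dim=1)[0]
                conf, word = probs.max(0)
                if conf.item() >= conf_thresh:
                    detections.append((start, word.item(), conf.item()))
    # keep the most confident detection per temporal region
    detections.sort(key=lambda d: -d[2])
    kept = []
    for start, word, conf in detections:
        if all(abs(start - s) > WIN_LEN // 2 for s, _, _ in kept):
            kept.append((start, word, conf))
    kept.sort(key=lambda d: d[0])          # restore temporal order
    return [word for _, word, _ in kept]

if __name__ == "__main__":
    model = SignCNN()                         # untrained; shape check only
    sentence = torch.randn(N_CHANNELS, 600)   # fake continuous recording
    print(sentence_decode(model, sentence, conf_thresh=0.0))
```

Scanning with several window lengths makes the decoder robust to signs of different durations, which is presumably the motivation for the multiple-sliding-window scheme; in practice the window lengths, stride, and merging rule would be tuned on held-out sentence data.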
