Abstract

Sign language recognition, especially sentence recognition, is of great significance for lowering the communication barrier between the hearing/speech impaired and non-signers. General glove-based solutions, which detect the motions of our dexterous hands, only recognize discrete single gestures (i.e., numbers, letters, or words) rather than sentences, falling far short of meeting signers' daily communication needs. Here, we propose an artificial-intelligence-enabled sign language recognition and communication system comprising sensing gloves, a deep learning block, and a virtual reality interface. Non-segmentation and segmentation-assisted deep learning models achieve the recognition of 50 words and 20 sentences. Significantly, the segmentation approach splits entire sentence signals into word units; the deep learning model then recognizes all word elements and reversely reconstructs and recognizes the sentences. Furthermore, new/never-seen sentences created by recombining word elements in new orders can be recognized with an average correct rate of 86.67%. Finally, the sign language recognition results are projected into virtual space and translated into text and audio, enabling remote and bidirectional communication between signers and non-signers.
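To make the segmentation-assisted pipeline concrete, the following is a minimal Python sketch of one plausible implementation: sentence-level glove signals are split into word units at rest pauses, each unit is classified, and the sentence is reconstructed from the predicted words. The function names, the rest-activity threshold heuristic, and the `word_classifier`/`vocab` interface are illustrative assumptions, not the authors' actual model.

```python
import numpy as np

def segment_signal(signal, rest_threshold=0.05, min_gap=20):
    """Split a (time, channels) glove signal into word-level (start, end) spans.

    A sample is 'active' when the summed absolute channel activity exceeds
    rest_threshold; a pause of at least min_gap samples ends a word.
    Both thresholds are assumed values for illustration.
    """
    activity = np.abs(signal).sum(axis=1)
    active = activity > rest_threshold
    spans, start, gap = [], None, 0
    for t, is_active in enumerate(active):
        if is_active:
            if start is None:
                start = t
            gap = 0
        elif start is not None:
            gap += 1
            if gap >= min_gap:             # long pause -> word boundary
                spans.append((start, t - gap + 1))
                start, gap = None, 0
    if start is not None:                  # signal ended mid-word
        spans.append((start, len(active)))
    return spans

def recognize_sentence(signal, word_classifier, vocab):
    """Classify each segmented unit and rebuild the sentence word by word."""
    words = []
    for start, end in segment_signal(signal):
        unit = signal[start:end]           # one word-level signal segment
        words.append(vocab[word_classifier(unit)])  # class index -> word label
    return " ".join(words)
```

Because the classifier only ever sees word-level units, a sentence built from a new ordering of known words needs no retraining, which is what allows the never-seen sentence recombinations reported above.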

Highlights

  • Sign language recognition, especially sentence recognition, is of great significance for lowering the communication barrier between the hearing/speech impaired and non-signers

  • By referring to the frequently used sign language in the American Sign Language guidebook[64], we analyze the motions involved in the daily sign expressions of the speech/hearing impaired

  • Hand motion accounts for 43% of the total, remaining dominant among the three major motions and indicating that hand motion perception is indispensable for sign language recognition


Introduction

Sign language recognition, especially sentence recognition, is of great significance for lowering the communication barrier between the hearing/speech impaired and non-signers. TENG gloves have been extensively demonstrated for monitoring finger motions through magnitude analysis or pulse counting[47,48,49,50,51]. Their data analytics are mostly based on simple manual feature extraction (e.g., amplitude, frequency, peak number), leading to a limited variety of recognizable hand motions/gestures and substantial feature loss. A human-skin-inspired interlocked triboelectric sensor mounted on a glove has been reported to recognize four discrete words: 'I', 'Happy', 'Meet', and 'You'[60]. In another case, a TENG glove was developed that achieves recognition of only five letters of sign language[61]. A system capable of sentence-level recognition creates the potential for human-to-human interaction (e.g., between the speech/hearing disordered and healthy people), which improves the practicality of sign language recognition systems.
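For contrast with the learning-based approach proposed here, the sketch below illustrates the kind of simple manual feature extraction (amplitude, dominant frequency, peak count) that the cited TENG-glove works rely on; the sampling rate, peak threshold, and function name are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import find_peaks

def manual_features(channel, fs=100.0, peak_height=0.1):
    """Reduce a 1-D triboelectric voltage trace to a few hand-crafted scalars.

    fs is the assumed sampling rate in Hz; peak_height is an assumed
    detection threshold. Collapsing the waveform to these few numbers is
    exactly the feature loss that limits such approaches.
    """
    peaks, _ = find_peaks(channel, height=peak_height)
    amplitude = float(np.max(np.abs(channel)))   # magnitude analysis
    peak_count = int(len(peaks))                 # pulse counting
    # Crude dominant-frequency estimate from the magnitude spectrum (skip DC).
    spectrum = np.abs(np.fft.rfft(channel))
    freqs = np.fft.rfftfreq(len(channel), d=1.0 / fs)
    dominant_freq = float(freqs[int(np.argmax(spectrum[1:])) + 1])
    return {"amplitude": amplitude,
            "peak_count": peak_count,
            "dominant_freq_hz": dominant_freq}
```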

