Abstract

Recognition and understanding of sign language can aid communication between deaf and non-deaf people. Recently, research groups have developed sign language recognition algorithms using multiple sensors. However, multisensor systems are impractical for everyday use; without a practical wearable alternative, communication would still require a sign language interpreter. In this study, a sign language classification method was developed using a single accelerometer to recognize the Korean sign language alphabet. The accelerometer is worn on the proximal phalanx of the index finger of the dominant hand. Triaxial accelerometer signals were used to segment the sign gesture (i.e., the time period when a user is performing a sign) and to recognize the 31 Korean sign language letters (corresponding to a chance level of 3.2%). The vector sum of the accelerometer signals was used to segment the sign gesture with 98.9% segmentation accuracy, which is comparable to that of previous multisensor systems (99.49%). The system was able to classify the Korean sign language alphabet with 92.2% accuracy, higher than that of previous work on the same sign language alphabet classification task. The findings demonstrate that a single accelerometer with simple features can be reliably used for Korean sign language alphabet recognition in everyday life.
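The abstract describes segmenting gestures from the vector sum (magnitude) of the triaxial accelerometer signal. The paper's exact procedure and parameters are not given here; the following is a minimal sketch of threshold-based vector-sum segmentation, in which the threshold, sampling rate, and minimum-duration values are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def segment_gestures(acc, fs=100.0, threshold=1.2, min_len=0.2):
    """Segment sign gestures from triaxial accelerometer data.

    acc: (N, 3) array of accelerometer samples in g.
    fs: sampling rate in Hz (assumed value).
    threshold: rest-level magnitude threshold in g (assumed value).
    min_len: minimum gesture duration in seconds (assumed value).
    Returns a list of (start, end) sample indices where the vector
    sum of the three axes exceeds the rest threshold.
    """
    magnitude = np.linalg.norm(acc, axis=1)   # vector sum of x, y, z
    active = magnitude > threshold            # samples above rest level
    # Find rising and falling edges of the active mask.
    edges = np.diff(active.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if active[0]:
        starts = np.r_[0, starts]
    if active[-1]:
        ends = np.r_[ends, len(active)]
    # Discard bursts shorter than the minimum gesture duration.
    min_samples = int(min_len * fs)
    return [(s, e) for s, e in zip(starts, ends) if e - s >= min_samples]
```

For example, a recording that rests at 1 g (gravity on one axis) with a brief high-magnitude burst yields a single segment covering the burst.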

Highlights

  • Hearing-impaired or deaf people generally use sign language and finger spelling to communicate with others

  • We developed a novel approach using a single accelerometer-based sensor worn on the index finger to recognize the Korean sign language alphabet

  • All support vector machine (SVM) classifiers achieved better results than those predicted by random chance (p < 0.001, one-sample t-test)
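The comparison against random chance in the highlight above can be made concrete: with 31 letters, chance-level accuracy is 1/31 (about 3.2%), and a one-sample t-test checks whether the mean classification accuracy exceeds that level. The sketch below computes the t statistic directly; the accuracy values in the usage example are placeholders, not the paper's data.

```python
import numpy as np

def one_sample_t(accuracies, mu0):
    """t statistic of a one-sample t-test comparing the mean of
    `accuracies` against a hypothesized chance level `mu0`."""
    x = np.asarray(accuracies, dtype=float)
    n = x.size
    # t = (sample mean - mu0) / standard error of the mean
    return (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(n))

# With 31 letters, random guessing succeeds 1/31 of the time (about 3.2%).
chance_level = 1 / 31
```

A large positive t statistic (compared against the t distribution with n - 1 degrees of freedom) indicates accuracy significantly above chance.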


Introduction

Hearing-impaired or deaf people generally use sign language and finger spelling to communicate with others. Their communication with those who are not familiar with sign language is limited. Various approaches have been developed to understand or recognize sign language or finger spelling using computer vision and wearable sensor systems [1,2,3,4]. Tao et al. developed an American sign language alphabet recognition system using Microsoft Kinect motion data [2]. Such a vision-based system does not require signers to use complicated instruments. Although these motion data-based recognition systems achieved reliable recognition accuracy, they had their own limitations in daily life.

