Abstract

Sign language allows deaf and mute communities to convey messages and to connect with society. Unfortunately, learning and practicing sign language is not common in wider society; hence, this study developed a sign language recognition prototype using the Leap Motion Controller (LMC). Many existing studies have proposed methods that recognize only part of the sign language alphabet, whereas this study aimed for full American Sign Language (ASL) recognition, covering 26 letters and 10 digits. Most ASL letters are static (no movement), but some are dynamic (they require specific movements). Thus, this study also extracted features from finger and hand motions to differentiate between static and dynamic gestures. The experimental results show that the recognition rates for the 26 letters using a support vector machine (SVM) and a deep neural network (DNN) are 80.30% and 93.81%, respectively. The recognition rates for the combined set of 26 letters and 10 digits are lower: approximately 72.79% for the SVM and 88.79% for the DNN. The sign language recognition system therefore has great potential to narrow the gap between deaf and mute communities and the rest of society, and the proposed prototype could serve as an interpreter in everyday service settings, such as banks or post offices.
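
As a rough illustration of the classification stage described above, the sketch below trains both an SVM and a small fully connected network on stand-in feature vectors, assuming the LMC features have already been extracted into fixed-length vectors. The feature dimension, sample counts, and model settings are illustrative assumptions, not the paper's actual configuration, and the synthetic data means the printed accuracies will not match the reported results.

    # Minimal sketch of the SVM-vs-DNN comparison on stand-in LMC features.
    # All dimensions and hyperparameters are illustrative assumptions.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    n_classes, n_per_class, n_features = 36, 50, 60  # 26 letters + 10 digits

    # Stand-in feature vectors (e.g., fingertip positions, palm orientation,
    # motion statistics), one Gaussian cluster per class.
    X = np.concatenate([rng.normal(loc=c, scale=3.0, size=(n_per_class, n_features))
                        for c in range(n_classes)])
    y = np.repeat(np.arange(n_classes), n_per_class)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)
    scaler = StandardScaler().fit(X_train)
    X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

    # SVM baseline with an RBF kernel, a common default for gesture features.
    svm = SVC(kernel="rbf", C=10.0).fit(X_train, y_train)

    # A small fully connected network standing in for the paper's DNN.
    dnn = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500,
                        random_state=0).fit(X_train, y_train)

    print("SVM accuracy:", accuracy_score(y_test, svm.predict(X_test)))
    print("DNN accuracy:", accuracy_score(y_test, dnn.predict(X_test)))

Standardizing the features before training matters in practice, since LMC position, orientation, and velocity channels have very different numeric ranges.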

Highlights

  • The results indicated that the deep neural network (DNN) classifier performed best with the C6 feature group, at 93.81% for the 26 classes (26 letters only) and 88.79% for the 36 classes (26 letters and 10 digits)

  • C6 was also the best-performing feature group for the 36 classes with the support vector machine (SVM), reaching an accuracy of 72.79%, higher than any other feature group

Introduction

Communication connects people by allowing them to convey messages to each other, to express their inner feelings, and to exchange thoughts, either verbally or non-verbally. Sign language is a non-verbal language that expresses meaning through the movement of the fingers, hands, arms, head, and body, as well as through facial expressions [1]. Learning and practicing sign language can be very challenging for the wider society. With the advancement of human–computer interaction technology over the past decades, humans have been able to interact with computers and receive feedback from them in return. Building on such methods, many sign recognition systems have been proposed that capture human gestures, analyze them, and output the recognized sign language as text or speech.
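
The study's distinction between static and dynamic letters (for example, "J" and "Z" are signed with motion) can be made concrete with a simple motion test on the captured gesture stream. The sketch below flags a capture window as dynamic when cumulative fingertip travel exceeds a threshold; the window length, units, and threshold are assumptions for illustration, not values from the paper.

    # Hedged sketch: classify a capture window as static or dynamic from
    # per-frame fingertip positions such as an LMC stream provides. The
    # 15 mm threshold and 30-frame window are illustrative assumptions.
    import numpy as np

    def is_dynamic(fingertips: np.ndarray, threshold_mm: float = 15.0) -> bool:
        """fingertips: shape (frames, 5, 3), xyz per fingertip in millimetres.
        Returns True when cumulative fingertip travel exceeds the threshold."""
        step = np.linalg.norm(np.diff(fingertips, axis=0), axis=2)  # (frames-1, 5)
        return float(step.sum()) > threshold_mm

    rng = np.random.default_rng(1)
    still = np.tile(rng.normal(size=(1, 5, 3)), (30, 1, 1))      # resting hand
    still += rng.normal(scale=0.01, size=still.shape)            # sensor jitter
    moving = still + np.linspace(0.0, 20.0, 30)[:, None, None]   # ~20 mm drift

    print(is_dynamic(still), is_dynamic(moving))  # expected: False True

One plausible design is to apply such a test first and route dynamic windows to a sequence-aware classifier, while static windows can be classified from a single frame.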
