Abstract

Across the world, millions of people use sign language as their main means of communication, and they face daily obstacles with their families, teachers, neighbours and employers. According to the most recent statistics of the World Health Organization, 360 million people worldwide (5.3% of the world's population) have disabling hearing loss, around 13 million of them in the Middle East. The development of automated systems capable of translating sign languages into words and sentences has therefore become a necessity. We propose a model that recognizes both static gestures, such as numbers and letters, and dynamic gestures, which involve movement and motion while performing the signs. We also propose a segmentation method that splits a sequence of continuous signs in real time by tracking the palm velocity, which makes it possible to translate not only pre-segmented signs but also continuous sentences. We use the Leap Motion Controller, an affordable and compact device that accurately detects and tracks the motion and position of the hands and fingers. The proposed model applies several machine learning algorithms, namely Support Vector Machine (SVM), K-Nearest Neighbour (KNN), Artificial Neural Network (ANN) and Dynamic Time Warping (DTW), on two different feature sets. This research increases the chance for Arabic hearing-impaired and deaf persons to communicate easily through Arabic Sign Language Recognition (ArSLR). The proposed model works as an interface between hearing-impaired persons and hearing persons who are not familiar with Arabic sign language, bridging the gap between them, which is also socially valuable. The model is applied to Arabic signs comprising 38 static gestures (the 28 letters and the numbers 1-10), 16 static words and 20 dynamic gestures. A feature selection process yields two different feature sets. For static gestures, the KNN model outperforms the other models on both the palm feature set and the bone feature set, with accuracies of 99% and 98% respectively. For dynamic gestures, the DTW model outperforms the other models on both the palm feature set and the bone feature set, with accuracies of 97.4% and 96.4% respectively.
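Two of the abstract's building blocks lend themselves to a short illustration: segmenting a continuous stream of signs wherever the palm slows down, and matching each dynamic segment against stored templates with Dynamic Time Warping. The Python sketch below shows both under stated assumptions; the speed threshold, frame rate, minimum-length constant and function names are illustrative choices, not the paper's actual parameters or API.

```python
import numpy as np

# Assumed parameter values for illustration only (NOT from the paper).
PAUSE_SPEED_MM_S = 40.0   # assumed: below this palm speed, the signer is pausing
MIN_SIGN_FRAMES = 15      # assumed: shortest plausible sign, in frames

def segment_by_palm_velocity(palm_positions, frame_rate=60.0):
    """Split a stream of 3-D palm positions (N x 3, in mm) into sign segments.

    Frames whose palm speed stays below PAUSE_SPEED_MM_S are treated as
    pauses between consecutive signs; returns (start, end) frame index pairs.
    """
    # Per-frame speed: displacement between consecutive frames times frame rate.
    speeds = np.linalg.norm(np.diff(palm_positions, axis=0), axis=1) * frame_rate
    segments, start = [], None
    for i, speed in enumerate(speeds):
        if speed > PAUSE_SPEED_MM_S and start is None:
            start = i                                  # motion begins: open a segment
        elif speed <= PAUSE_SPEED_MM_S and start is not None:
            if i - start >= MIN_SIGN_FRAMES:
                segments.append((start, i))            # motion ends: close the segment
            start = None
    if start is not None and len(speeds) - start >= MIN_SIGN_FRAMES:
        segments.append((start, len(speeds)))
    return segments

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two sequences of feature vectors."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            step = np.linalg.norm(a[i - 1] - b[j - 1])     # local frame-to-frame cost
            cost[i, j] = step + min(cost[i - 1, j],        # insertion
                                    cost[i, j - 1],        # deletion
                                    cost[i - 1, j - 1])    # match
    return cost[n, m]

def classify_dynamic(segment, templates):
    """1-nearest-neighbour DTW: the label of the closest stored template wins."""
    # templates is a list of (label, feature_sequence) pairs.
    return min(templates, key=lambda t: dtw_distance(segment, t[1]))[0]
```

Segmenting by palm velocity lets the recognizer run on continuous sentences: each returned (start, end) window is then handed to the static or dynamic classifier.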

Highlights

  • Sign language is the most common and important way for deaf and hearing-impaired people to communicate and integrate with their society

  • Several experiments are performed to test the proposed models, covering both static gestures (the Arabic alphabet, Arabic numerals and the common signs used with a dentist) and dynamic gestures (common verbs and nouns signed with one hand or two hands)

  • We developed a model for Arabic sign recognition using the Leap Motion Controller (LMC); a minimal classification sketch follows this list
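For the static gestures, the abstract reports that KNN performed best on both feature sets. Below is a minimal sketch of that classification step, assuming the palm or bone feature vectors have already been extracted from LMC frames; the feature dimension, number of samples, value of k and the random stand-in data are all assumptions for illustration, not the paper's dataset or settings.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Stand-in data: X holds one feature vector per pre-segmented static sign
# (e.g., palm features such as palm position/orientation, or bone features
# such as per-bone directions); y holds the gesture labels. Shapes are
# illustrative assumptions, not the paper's actual feature dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(380, 24))       # 380 samples, 24-D feature vectors (assumed)
y = rng.integers(0, 38, size=380)    # 38 static gesture classes, as in the abstract

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

knn = KNeighborsClassifier(n_neighbors=3)   # k=3 is an assumed value
knn.fit(X_train, y_train)
print("static-gesture accuracy:", knn.score(X_test, y_test))
```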


Introduction

Sign language is the most common and important way for deaf and hearing-impaired people to communicate and integrate with their society. In Egypt, according to the last study of the "Central Agency for Public Mobilization and Statistics", the number of deaf people was around 2 million and grew to close to 4 million by 2012 (http://www.who.int/mediacentre/factsheets/fs300/en/). Most of these people cannot read or write the Arabic language, and 80% of them are illiterate, leaving them isolated from their society. They form a large part of society that cannot be neglected, yet they still cannot communicate normally with their community.
