Abstract

A Proposed Model for Standard Arabic Sign Language Recognition Based on Multiplicative Neural Network

Highlights

  • There are 150 million deaf people around the world, including three million in Egypt [1]

  • This study concludes that most gestures are combinations of 158 postures: 88 single-hand postures (44 left-hand and 44 right-hand) and 70 two-hand postures. A total of 250 gestures were captured for this study

  • This work presents an implementation of Arabic sign language recognition: a pulse-coupled neural network (PCNN) is used to extract static posture features, and a multiplicative neural network (MNN) serves as the classifier (see the sketch after this list)
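
As a hedged illustration of the classifier named in the last highlight, the sketch below implements a single multiplicative neuron in the common pi-neuron formulation, where the net input is the product u = Π_i (w_i·x_i + b_i) rather than a weighted sum. The logistic activation, the parameter names, and the usage values are assumptions for illustration; the highlights do not reproduce the paper's MNN equations.

```python
import numpy as np

def multiplicative_neuron(x, w, b):
    # Net input is the PRODUCT of weighted, biased inputs:
    #   u = prod_i (w_i * x_i + b_i)
    # unlike the additive sum used by a conventional perceptron.
    u = np.prod(w * x + b)
    # Logistic activation squashes u to a class score in (0, 1).
    return 1.0 / (1.0 + np.exp(-u))

# Hypothetical usage: score a 4-dimensional posture feature vector.
rng = np.random.default_rng(0)
x = rng.random(4)          # stand-in for a PCNN-derived feature
w = rng.normal(size=4)     # trained weights (illustrative values)
b = rng.normal(size=4)     # per-input biases (illustrative values)
print(multiplicative_neuron(x, w, b))
```

Multiplying the weighted inputs lets a single neuron capture interactions between features that an additive perceptron would need hidden units to model, which is the usual motivation for MNN classifiers.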

Summary

INTRODUCTION

There are 150 million deaf people around the world, including three million in Egypt [1]. When Microsoft released the Kinect in November 2010 [7], it mainly targeted consumers owning a Microsoft Xbox 360 console, allowing the user to interact with the system through gestures and speech [7]. The device features an RGB camera, a depth sensor, and a multi-array microphone, and is capable of tracking the user's body movements. This new technology encouraged sign language recognition researchers to adapt it for real-time recognition. In Arabic Sign Language, the two hands can be used interchangeably to express the same gesture, so two distinct gestures can carry the same meaning (Fig. 2). A schematic block diagram of the modified PCNN neuron, described by Eqs. (1)-(5), is shown in Fig. 5.
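
The paragraph above refers to the modified PCNN neuron of Eqs. (1)-(5) without reproducing them. As a minimal sketch, the code below iterates the widely used standard PCNN neuron model (feeding, linking, internal activity, pulse generator, dynamic threshold) over a 2-D posture image and collects the per-iteration firing count as a time-signature feature; all parameter values, the kernel, and the feature choice are illustrative assumptions, not the paper's modified formulation.

```python
import numpy as np
from scipy.signal import convolve2d

def pcnn_time_signature(S, n_iter=10, alpha_F=0.1, alpha_L=1.0,
                        alpha_T=0.3, V_F=0.5, V_L=0.2, V_T=20.0,
                        beta=0.1):
    """Iterate a standard PCNN over stimulus S (intensities in [0, 1])
    and return G[n], the number of neurons firing at each iteration."""
    K = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])           # local coupling kernel
    F = np.zeros_like(S, dtype=float)         # feeding input
    L = np.zeros_like(S, dtype=float)         # linking input
    T = np.ones_like(S, dtype=float)          # dynamic threshold
    Y = np.zeros_like(S, dtype=float)         # pulse output
    G = []
    for _ in range(n_iter):
        Yk = convolve2d(Y, K, mode="same")    # neighbours' last pulses
        F = np.exp(-alpha_F) * F + V_F * Yk + S   # Eq. (1): feeding
        L = np.exp(-alpha_L) * L + V_L * Yk       # Eq. (2): linking
        U = F * (1.0 + beta * L)                  # Eq. (3): internal activity
        Y = (U > T).astype(float)                 # Eq. (4): pulse generator
        T = np.exp(-alpha_T) * T + V_T * Y        # Eq. (5): threshold update
        G.append(Y.sum())                         # time-signature sample
    return np.array(G)
```

The resulting vector G is the classic PCNN "time signature", a compact, largely translation- and rotation-tolerant descriptor that could plausibly feed the MNN classifier stage; the paper's actual feature generation may differ.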

PCNN and Feature Generation
Sign Language Recognition and Graph Matching
EXPERIMENTAL RESULTS
CONCLUSIONS