Abstract

Intelligent machine translation systems play a remarkable role in integrating people with disabilities into the community, yet Arabic-to-Arabic-Sign-Language translation systems remain limited. Deep Learning (DL) has been successfully applied to problems in music information retrieval, image recognition and text recognition, but its use in sign language recognition is rare. This paper introduces an automatic virtual translation system from Arabic into Arabic Sign Language (ASL) via a popular DL architecture: the Recurrent Neural Network (RNN). The proposed system uses a deep neural network training-based approach to ASL that combines an RNN with Graphical Processing Unit (GPU) parallel processors. The system is evaluated using both objective and subjective measures. The obtained results show reduced errors, faster avatar rendering, and signs and facial expressions that are well received by the Deaf community. The signing avatar is highly encouraged as a simulator of natural human signing.

Highlights

  • Every year in the US, more than 12,000 babies are born with hearing loss

  • Training-based translation depends on the existence of a bilingual corpus: a set of Arabic sentences written in different forms and their corresponding Arabic Sign Language (ASL) translations

  • 54 Arabic sentences form the benchmark, generating a data set of more than 1,000 samples for training
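The bilingual corpus described above can be pictured as a simple parallel data structure. The sketch below is purely illustrative (the sentence, the gloss labels, and the `gloss_for` helper are hypothetical, not taken from the paper): each Arabic sentence variant is paired with its ASL gloss sequence, so that different surface forms of the same sentence map to the same signing output.

```python
# Illustrative sketch of a bilingual Arabic / ASL-gloss corpus.
# All entries and names here are hypothetical examples, not the paper's data.
bilingual_corpus = [
    # (Arabic sentence variant, corresponding ASL gloss sequence)
    ("ذهب الولد إلى المدرسة", ["BOY", "SCHOOL", "GO"]),
    ("الولد ذهب إلى المدرسة", ["BOY", "SCHOOL", "GO"]),  # reordered variant
]

def gloss_for(sentence):
    """Look up the ASL gloss sequence for a known sentence variant."""
    for arabic, glosses in bilingual_corpus:
        if arabic == sentence:
            return glosses
    return None

print(gloss_for("ذهب الولد إلى المدرسة"))  # ['BOY', 'SCHOOL', 'GO']
```

A training-based system generalizes beyond such exact lookup, but the corpus it learns from has exactly this paired-sentence shape.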


Summary

Introduction

Every year in the US, more than 12,000 babies are born with hearing loss (http://www.parentcenterhub.org/wpcontent/uploads/repo_items/fs3.pdf). This work extends the adaptation-based Arabic sign language interpreter with limited corpora (Mohamed et al., 2016) into a better-performing training-based system. DL methods construct new features by transforming input data through multiple layers of nonlinear processing; this is accomplished by training large neural networks (NNET) with several hidden layers on data sets with very large sample sizes. According to Hopfield (2008), the Hopfield Network (or Hopfield Model) is one good way to implement an associative memory. It is a fully connected RNN whose activations are normally ±1 (defined using the signum function), rather than 0 and 1, so each neuron updates as s_i = sgn(sum_j w_ij s_j), where w_ij is the weight between neurons i and j. The paper then presents the proposed training-based DNN system, which uses an RNN to classify recorded signing videos to their relevant text (from a speech recognizer). All target signing videos are transformed into signing animations (avatar).
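The Hopfield associative memory described above can be sketched in a few lines. This is a minimal generic implementation of the standard model (Hebbian training, signum update), not the paper's code; the pattern used is an arbitrary example.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian learning: W is the sum of outer products of the stored
    ±1 patterns, with the self-connections (diagonal) zeroed out."""
    n = patterns.shape[1]
    W = patterns.T @ patterns
    np.fill_diagonal(W, 0)
    return W / n

def recall(W, state, max_steps=10):
    """Repeatedly apply the signum update s_i = sgn(sum_j w_ij s_j)
    until the state stops changing."""
    s = state.copy()
    for _ in range(max_steps):
        new = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(new, s):
            break
        s = new
    return s

# Store one ±1 pattern, then recall it from a corrupted probe.
pattern = np.array([[1, -1, 1, -1, 1, -1]])
W = train_hopfield(pattern)
noisy = np.array([1, -1, 1, -1, 1, 1])  # last bit flipped
restored = recall(W, noisy)             # converges back to the stored pattern
print(restored)
```

Because the update drives the state toward stored attractors, the flipped bit is corrected after one pass, which is exactly the associative-memory behavior the summary refers to.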

A Proposed Training-Based DNN Model
Results
Parallel DNN Results
Objective
Conclusion