Abstract

At present, sign language is very important for people who suffer from hearing loss or who cannot speak. Hearing people tend to disregard the significance of sign language, even though it is the main means of communication for deaf and mute communities. Therefore, this study proposes a model for Arabic Sign Language recognition based on a deep learning Convolutional Neural Network (CNN). The model was implemented in Python using OpenCV. The dataset contains 54,049 images of Arabic sign language alphabet gestures. Thirty-two folders were created, each containing 1500 images of hand gestures captured in varied environments. The dataset was divided into a training set (70%), a test set (20%), and a validation set (10%). The results show that the proposed model achieved an accuracy of 94.8%, and its effectiveness was further confirmed through trials with several users and their comments and feedback.
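As a rough illustration of the training setup described above, the sketch below builds a small CNN classifier for the 32 Arabic alphabet classes and applies the 70/20/10 train/test/validation split. The abstract does not specify the network architecture, image size, or framework, so the Keras layers, the 64x64 grayscale input, and the placeholder data here are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch (assumed architecture): small Keras CNN for 32-class
# Arabic sign alphabet images with a 70/20/10 train/test/validation split.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from sklearn.model_selection import train_test_split

def build_model(num_classes=32, input_shape=(64, 64, 1)):
    # Two convolution/pooling stages followed by a dense classifier head.
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Placeholder data standing in for the gesture images (hypothetical shapes).
X = np.random.rand(1000, 64, 64, 1).astype("float32")
y = np.random.randint(0, 32, size=1000)

# 70% training; the remaining 30% is split into 20% test and 10% validation.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.30, stratify=y)
X_test, X_val, y_test, y_val = train_test_split(X_rest, y_rest, test_size=1/3, stratify=y_rest)

model = build_model()
model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=5, batch_size=32)
print("Test accuracy:", model.evaluate(X_test, y_test)[1])
```

In practice, the images would be loaded and resized (for example with OpenCV, as the abstract indicates Python/OpenCV was used) before being fed to the network; the random arrays above only stand in for that data.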
