Abstract

People with hearing and speech disabilities in Arab society face obstacles in communication because Arabic Sign Language (ArSL) is not widely understood by other members of society. Using technology to translate ArSL hand gestures into written Arabic can help bridge communication gaps and break down disability barriers in Arab society. Recent research uses cameras to record the user's hand features and recognize gestures. In this research, software was developed to recognize ArSL hand gestures in real time and immediately translate them into written Arabic alphabet letters using Convolutional Neural Networks (CNNs) via Google's Teachable Machine. The methodology involves data collection and preparation: an ArSL alphabet dataset was created with 28 categories representing the Arabic letters, and 400 images were captured for each letter. Then 87.5% of the dataset was passed to Google's Teachable Machine for the training process, after which the model was evaluated. Finally, the remaining 12.5% of the dataset was used on a local host to test the generated model, where the model achieved an accuracy of 92%. The experimental output shows that the proposed model performs well in real time, with an average recognition accuracy of 93.8%.
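As a rough illustration of the dataset partitioning described above, the following is a minimal Python sketch, assuming one folder of 400 images per letter class; the constants and helper name are illustrative, not taken from the authors' code:

```python
import random

# Assumed dataset layout from the abstract: 28 Arabic-letter classes,
# 400 images each; 87.5% go to Teachable Machine for training and the
# remaining 12.5% are held out for local testing.
NUM_CLASSES = 28
IMAGES_PER_CLASS = 400
TRAIN_FRACTION = 0.875  # 87.5%

def split_dataset(seed=0):
    """Return (train, test) lists of (class_index, image_index) pairs,
    split per class so every letter keeps the same train/test ratio."""
    rng = random.Random(seed)
    train, test = [], []
    for cls in range(NUM_CLASSES):
        indices = list(range(IMAGES_PER_CLASS))
        rng.shuffle(indices)  # randomize which images go to each split
        cut = int(IMAGES_PER_CLASS * TRAIN_FRACTION)  # 350 per class
        train += [(cls, i) for i in indices[:cut]]
        test += [(cls, i) for i in indices[cut:]]
    return train, test

train, test = split_dataset()
print(len(train), len(test))  # 9800 training and 1400 test samples
```

With 28 classes of 400 images, this split yields 9,800 training images and 1,400 held-out test images; splitting per class (stratified) keeps every letter equally represented in both sets.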
