People with hearing and speech disabilities in Arab society face obstacles in communication, since Arabic Sign Language (ArSL) is not widely understood among the society's members. Using technology to translate ArSL hand gestures into written Arabic can help bridge communication gaps and break down disability barriers in Arab society. Recent research uses a camera to record the user's hand features in order to recognize gestures. In this research, software was developed to recognize ArSL hand gestures in real time and immediately translate them into written Arabic alphabet letters using Convolutional Neural Networks (CNNs) via Google's Teachable Machine. The methodology involves data collection and preparation: an ArSL alphabet database was created with 28 categories representing the Arabic letters, and 400 images were captured for each letter. Then 87.5% of the dataset was passed to Google's Teachable Machine for the training process, after which the trained model was evaluated. Finally, the remaining 12.5% of the dataset was used on a local host to test the generated model, which achieved an accuracy of 92%. The experimental output shows that the proposed model performs well in real time, with an average recognition accuracy of 93.8%.
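The dataset split described above can be sketched as follows; this is a minimal illustration of the reported numbers (28 letter classes, 400 images per class, an 87.5%/12.5% train/test split), not code from the paper itself:

```python
# Sketch of the per-class and total train/test split reported in the abstract.
# All constants come from the abstract; variable names are illustrative.

NUM_CLASSES = 28          # categories, one per Arabic letter
IMAGES_PER_CLASS = 400    # images captured for each letter
TRAIN_FRACTION = 0.875    # 87.5% used for Teachable Machine training

train_per_class = int(IMAGES_PER_CLASS * TRAIN_FRACTION)   # images trained per letter
test_per_class = IMAGES_PER_CLASS - train_per_class        # images held out per letter

total_train = train_per_class * NUM_CLASSES
total_test = test_per_class * NUM_CLASSES

print(train_per_class, test_per_class, total_train, total_test)
# → 350 50 9800 1400
```

So each letter contributes 350 training images and 50 held-out test images, for 9,800 training and 1,400 test images overall.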