Abstract

For the Arabic-speaking deaf population, Arabic Sign Language (ArSL) is an essential means of communication. Motivated by the importance of clear communication, this research presents a convolutional neural network (CNN) model for recognizing ArSL. By harnessing the power of deep learning and tailoring the model to the particularities of ArSL, we aim to improve the deaf community's access to communication and strengthen its sense of belonging. To capture the complex hand movements and visual patterns characteristic of ArSL, the proposed model relies on a set of carefully chosen architectural decisions, including the number of layers, the kernel sizes, the activation functions, and the pooling strategies. Experimental results on a large dataset show that our model outperforms state-of-the-art machine learning techniques. These results not only lay the groundwork for future developments in sign language recognition but also demonstrate the promise of our approach for improving communication within the Arabic-speaking deaf community.
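The abstract names the core CNN design choices (kernel size, activation function, pooling) without giving the model's actual configuration. As a minimal illustration of that convolution → ReLU → max-pooling pattern, here is a NumPy sketch of a single feature-extraction stage; the image, kernel, and all sizes below are hypothetical stand-ins, not the paper's architecture.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid (no padding) 2-D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Element-wise rectified linear activation."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling with a size x size window."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# Hypothetical 6x6 grayscale patch and a 3x3 vertical-edge kernel.
img = np.arange(36, dtype=float).reshape(6, 6)
kern = np.array([[-1.0, 0.0, 1.0]] * 3)

# One conv -> ReLU -> pool stage: 6x6 input -> 4x4 map -> 2x2 features.
features = max_pool(relu(conv2d(img, kern)))
print(features.shape)  # (2, 2)
```

A real ArSL recognizer would stack several such stages with learned kernels and finish with fully connected classification layers; this sketch only makes the abstract's vocabulary concrete.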
