Abstract

The research presents a general overview of sign languages, building on our previous survey that covered all aspects of sign language recognition, including the tools used to collect sign language data and the algorithms that achieve the best results. A specialized database was prepared that combines the alphabet signs of Arabic, American, and British Sign Language, three of the most important and most widespread sign languages in the world. On this database we applied deep learning techniques such as LeNet, VGG-16, and CapsNet, which our previous studies identified as among the best methods for solving sign language recognition problems. The purpose of the research is to bridge the communication gap between deaf and hearing people who share a single sign language, or who try to communicate across different countries, by recognizing these languages easily. We first applied a traditional deep learning model, LeNet; we then applied VGG-16 as a pre-trained model, adjusting some of its layers to suit our problem; and we also applied CapsNet, which is well suited to handling the deformation, rotation, and scaling of signs. The best results were achieved with VGG-16, since it was pre-trained on ImageNet, a database containing millions of images: it reached an accuracy of 99.69% when training the model and 99.65% when testing it. CapsNet and LeNet yielded lower accuracies than VGG-16. The LeNet model achieved 96.54%, 97.45%, and 94.95% on BSL, ASL, and ArSL respectively; the CapsNet model achieved 98.4848%, 98.4286%, and 99.5652% on ArSL, ASL, and BSL respectively; and VGG-16 achieved 99.05%, 98.50%, and 99.69% on ArSL, ASL, and BSL respectively.
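
As a rough illustration of the transfer-learning setup described above, the sketch below loads VGG-16 with ImageNet weights in Keras, freezes the pre-trained convolutional base, and attaches a new classification head for the alphabet signs. This is our own minimal sketch, not the authors' published code; the input size, the 28-class output, and the head layers are illustrative assumptions.

```python
# Minimal sketch of VGG-16 transfer learning for sign-alphabet classification.
# Assumptions (not from the paper): 224x224 RGB inputs, 28 output classes,
# and a simple Dense/Dropout head on top of the frozen convolutional base.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 28            # hypothetical number of alphabet signs; adjust per language
INPUT_SHAPE = (224, 224, 3)

# Pre-trained convolutional base without the original ImageNet classifier.
base = VGG16(weights="imagenet", include_top=False, input_shape=INPUT_SHAPE)
base.trainable = False      # freeze pre-trained layers; only the new head is trained

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Freezing the base and training only the new head is the usual first step when fine-tuning a pre-trained network on a small, specialized dataset; selected top convolutional blocks can later be unfrozen for further fine-tuning.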
