Abstract

The objective of this study is to develop an automatic sign language recognition system to improve the quality of life of the deaf-mute community in Egypt. The system aims to bridge the communication gap by identifying right-hand gestures and converting them into audible speech or displayed text. To achieve this, a convolutional neural network (CNN) model is employed and trained to recognize right-hand gestures captured by an affordable web camera. A dataset was created with the help of six volunteers for training, testing, and validation. The proposed system achieved an average accuracy of 99.65% in recognizing right-hand gestures, with a precision of 95.11%. It also addressed the problem of gesture similarity between certain alphabet letters by successfully distinguishing their respective gestures. The proposed system offers a promising solution for automatic sign language recognition, benefiting the deaf-mute community in Egypt. By accurately identifying and converting right-hand gestures, the system facilitates communication and interaction with the wider world, and it has the potential to greatly enhance the quality of life for individuals who are unable to speak or hear, promoting inclusivity and accessibility.
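The abstract describes classifying webcam gesture images with a CNN. As a rough illustration of that kind of pipeline, the sketch below builds a small image classifier in Keras; the input size (64×64 grayscale), layer layout, and class count of 28 (one per Arabic letter) are assumptions for illustration, not the paper's actual architecture.

```python
# Minimal sketch of a CNN gesture classifier, assuming 64x64 grayscale
# webcam frames and 28 gesture classes (roughly one per Arabic letter).
# The paper's real architecture, input size, and class set may differ.
import numpy as np
from tensorflow.keras import layers, models

NUM_CLASSES = 28   # assumption: one class per alphabet gesture
IMG_SIZE = 64      # assumption: webcam frames resized to 64x64

model = models.Sequential([
    layers.Input(shape=(IMG_SIZE, IMG_SIZE, 1)),
    layers.Conv2D(32, 3, activation="relu"),   # low-level edge features
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),   # higher-level shape features
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # class probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# A single dummy frame passes through the network and yields one
# probability per gesture class.
dummy = np.zeros((1, IMG_SIZE, IMG_SIZE, 1), dtype=np.float32)
probs = model.predict(dummy, verbose=0)
print(probs.shape)  # one row of 28 class probabilities
```

In a full system, the predicted class index would then be mapped to a letter and passed to a text display or text-to-speech component.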
