The communication gap remains one of the most significant barriers between individuals with hearing and speech impairments and the broader society. This project addresses that challenge by developing a real-time Indian Sign Language (ISL) detection system that leverages computer vision and machine learning techniques. By capturing hand gestures from video input, the system translates these movements into text or speech, enabling effective communication between ISL users and those unfamiliar with the language. The system also incorporates text-to-speech functionality for a more natural interaction experience. The proposed model uses Convolutional Neural Networks (CNNs) for image processing and gesture recognition, trained on a comprehensive dataset of ISL gestures. The pipeline combines preprocessing, feature extraction, and classification stages to identify both static and dynamic gestures. Designed around the nuances of ISL, the system provides accurate real-time gesture recognition and offers multilingual support. The initiative aspires to create an inclusive environment by empowering the hearing-impaired community and promoting better integration within society. By relying on cost-effective techniques, the project aims to remain scalable and practical for everyday applications, making communication more efficient and inclusive.

Keywords: Indian Sign Language (ISL), Gesture Recognition, Convolutional Neural Networks (CNNs), Real-time Communication
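To make the described pipeline concrete, below is a minimal sketch of a CNN gesture classifier with a text-to-speech hook, assuming a Keras/TensorFlow stack and the pyttsx3 offline TTS engine. The input shape (64x64 grayscale crops), the class count (NUM_CLASSES = 35), and the layer widths are illustrative assumptions, not the paper's actual configuration.

```python
import tensorflow as tf
import pyttsx3

NUM_CLASSES = 35  # hypothetical count, e.g. ISL digits plus letters

def build_gesture_cnn(input_shape=(64, 64, 1), num_classes=NUM_CLASSES):
    """Small CNN: stacked conv/pool blocks perform feature extraction,
    and a dense softmax head classifies the gesture."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

def speak(text: str) -> None:
    """Voice a recognized gesture label via offline text-to-speech."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

model = build_gesture_cnn()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# In use, model.fit(...) would train on preprocessed ISL gesture crops;
# at inference, the argmax class label would be mapped to a word or
# phrase and passed to speak().
```

In this sketch, preprocessing (hand cropping, resizing, normalization) happens before the model sees a frame; the convolutional blocks stand in for the feature-extraction stage, and the softmax head for the classification stage described in the abstract.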