Abstract

Moving your hands in the air to write on the screen, and making hand signs that the system both displays and speaks aloud, may seem like a futuristic application of image processing and gesture recognition. In this paper, we present a novel approach to an interactive learning platform where users can draw content on screen by moving a hand in the air and can also communicate through hand sign language, easing interaction with the hearing-impaired and non-verbal community. Our system combines both technologies to create a smooth and engaging user experience and can be used in interactive art installations or virtual reality setups. The air canvas enables users to draw and manipulate digital content in mid-air through object tracking built on computer vision and the MediaPipe framework, while hand gesture recognition interprets hand signs in real time to perform actions or commands within the system. The model not only recognizes a sign but also speaks it aloud using pyttsx3, a text-to-speech conversion library, enabling effective communication between hearing users and people with non-verbal or hearing impairments. We validate the model on a real dataset that we collected and trained on ourselves; this training was essential for refining the model's accuracy and efficiency.
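The abstract names MediaPipe for hand tracking and pyttsx3 for speech output. The following is a minimal illustrative sketch, not the authors' released code: it tracks the index fingertip with MediaPipe Hands to draw on an air canvas and speaks a recognized sign with pyttsx3. The gesture classifier itself is omitted, and the label "HELLO" is a placeholder.

```python
# Illustrative sketch only: MediaPipe fingertip tracking for an air canvas,
# plus pyttsx3 speech output for a recognized sign. Classifier not shown.
import cv2
import mediapipe as mp
import numpy as np
import pyttsx3

mp_hands = mp.solutions.hands
engine = pyttsx3.init()          # text-to-speech engine used to speak a recognized sign

cap = cv2.VideoCapture(0)
canvas = None                    # drawing layer overlaid on the camera frame

with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.flip(frame, 1)
        if canvas is None:
            canvas = np.zeros_like(frame)

        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            lm = results.multi_hand_landmarks[0].landmark
            h, w, _ = frame.shape
            # Index fingertip (landmark 8) acts as the "pen" for the air canvas.
            x, y = int(lm[8].x * w), int(lm[8].y * h)
            cv2.circle(canvas, (x, y), 5, (255, 0, 0), -1)

        cv2.imshow("Air Canvas", cv2.add(frame, canvas))
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

cap.release()
cv2.destroyAllWindows()

# Speaking a recognized sign ("HELLO" is a placeholder label from a hypothetical classifier).
engine.say("HELLO")
engine.runAndWait()
```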
