Abstract

Aphonia, the loss of voice, presents significant challenges in interpersonal communication. This project proposes a dual-pronged approach that combines hand gesture recognition and voice conversion techniques to enable effective communication for aphonic individuals. Real-time hand gesture recognition provides an alternative means of expressing ideas and emotions: a webcam captures the gestures, which are translated into textual or auditory output, offering a versatile mode of communication with deaf and aphonic people. In addition, voice conversion algorithms synthesize natural, intelligible speech from typed or selected text. Coupling these technologies allows aphonic individuals to take part in fluid conversations, improving their social interactions and overall quality of life. When speech is unavailable, the hand is the most natural channel of communication, and gestures formed from diverse hand shapes and finger alignments enable human-machine interaction. The purpose of this work is to develop a hand gesture detection model that converts recognized gestures to text and audio, and that also responds to user voice commands by displaying the corresponding hand signs from a database.
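
The following is a minimal sketch of how the capture-recognize-speak pipeline described above might be implemented. It assumes OpenCV for webcam capture, MediaPipe Hands for landmark detection, and pyttsx3 for speech output; none of these libraries, nor the placeholder classify_gesture rule, is specified by the abstract, and the reverse direction (voice command to hand-sign display) is omitted for brevity.

```python
# Illustrative sketch only: webcam frames -> hand landmarks -> gesture label -> text/speech.
# Library choices (OpenCV, MediaPipe, pyttsx3) and the classify_gesture rule are assumptions,
# not the authors' implementation.
import cv2
import mediapipe as mp
import pyttsx3

mp_hands = mp.solutions.hands
tts = pyttsx3.init()  # offline text-to-speech engine


def classify_gesture(landmarks):
    """Placeholder classifier mapping 21 hand landmarks to a word.
    A trained model would replace this hypothetical rule."""
    if landmarks[4].y < landmarks[0].y:  # thumb tip above wrist (image y grows downward)
        return "hello"
    return None


def speak(text):
    """Render the recognized text as audible speech."""
    tts.say(text)
    tts.runAndWait()


def main():
    cap = cv2.VideoCapture(0)  # webcam capture
    with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV delivers BGR frames
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                label = classify_gesture(results.multi_hand_landmarks[0].landmark)
                if label:
                    # textual output on screen, auditory output through the TTS engine
                    cv2.putText(frame, label, (10, 40),
                                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
                    speak(label)
            cv2.imshow("Gesture to speech", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    cap.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    main()
```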
