Most people are born with the ability to communicate with one another through speech; regrettably, not everyone has the capacity for speech and hearing. Within the community of people with speech and hearing impairments, nonverbal communication serves as the primary method of interaction. Sign language conveys meaning through simultaneous combinations of hand shapes and movements, together with body, arm, and hand gestures and facial expressions. People who are unable to speak use sign language to communicate both with others who are speech-impaired and with hearing people familiar with its subtleties. An interpreter is sometimes needed to translate the meaning of the signs for those who can speak but do not fully understand sign language. However, consistent access to interpreters is not always feasible, and not everyone can master the intricacies of sign language. An alternative approach is therefore a device known as the Gesture Vocalizer, which acts as an intermediary by translating the gestures produced by a person with a speech impairment into audio output. The hardware iteration of this Gesture Vocalizer project is an array of microcontroller systems designed to enable communication between mute, deaf, and visually impaired communities and those who do not face such communication challenges. In this work, we propose a gesture vocalizer based on sensors and a microcontroller; its design centers on a hand glove integrated with a microcontroller system.
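The core idea of a sensor-based gesture vocalizer can be sketched as mapping finger-bend sensor readings to predefined phrases. The following is a minimal illustrative sketch only, not the paper's actual implementation: the sensor thresholds, gesture table, and phrases are all assumptions made for demonstration.

```python
# Hypothetical sketch of a glove-based gesture vocalizer's logic.
# Assumption: five flex sensors (one per finger) are sampled by a
# 10-bit ADC on the microcontroller; thresholds, gesture patterns,
# and phrases below are illustrative, not from the paper.

# Each gesture is a tuple of binary finger states (1 = finger bent).
GESTURE_PHRASES = {
    (1, 1, 1, 1, 1): "Hello",
    (0, 1, 1, 1, 1): "Thank you",
    (1, 0, 0, 0, 0): "Yes",
    (0, 0, 0, 0, 0): "No",
}

BEND_THRESHOLD = 512  # midpoint of a 10-bit ADC range (assumed)

def classify(readings):
    """Map raw flex-sensor ADC readings to a phrase, or None."""
    states = tuple(1 if r > BEND_THRESHOLD else 0 for r in readings)
    return GESTURE_PHRASES.get(states)

# Example: all five sensors read above the bend threshold.
print(classify([700, 650, 800, 720, 690]))  # -> Hello
```

In a deployed device, the recognized phrase would be passed to a text-to-speech or audio-playback module to produce the spoken output; on real hardware this loop would run on the microcontroller itself rather than in Python.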