Abstract

Gestures are the primary mode of communication for deaf-mute people worldwide. This work presents a gesture-based voice system: a real-time, vision-based machine learning approach to tracking hand and finger gestures. It was developed in Python on a Raspberry Pi with a camera module, using the OpenCV computer vision library. The Raspberry Pi runs an image processing pipeline that tracks the fingers of the hand using extracted features. The main purpose of a gesture-based speaking system is to enable communication between humans and computers; the resulting system can recognize and track known objects, with additional applications in surveillance. The main objective of the proposed work is to enable the system to recognize and track specified properties of objects using the Raspberry Pi, the camera module, and an appropriate image processing technique. The OpenCV feature extraction routines, accessed from Python, run on the Raspberry Pi with an external camera. A gesture-based speaking system using machine learning offers a new, intuitive, and simple way to communicate with computers in a more human-like manner.
