Abstract

All over the world, hearing- and speech-impaired people communicate through sign language, as it is their only reliable means of interaction among themselves as well as with hearing people. This language is mainly made up of hand shapes, movements, and gestures. This system aims to bridge the communication gap and help the deaf and mute use technology to carry out their daily transactions through a simple, easily implementable approach, and to help hearing- and speech-impaired people communicate with those who do not understand sign language. The sign language considered in this system covers the 26 letters of the alphabet and some simple phrases such as Hello, Good morning, and Thank you. The proposed system aims to recognize sign language and convert it to text. In this paper, a technique is proposed to develop a system that collects the datasets used as input. The input given to the system is an image of the hand depicting a sign. OpenCV is used as the image-processing tool in the proposed system. The system is trained in a specific manner, discussed in the upcoming sections, to predict the result for a given input. TensorFlow is mainly used for the prediction and detection analysis.
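As a rough illustration of the pipeline the abstract outlines, the sketch below uses OpenCV to preprocess a hand image and a small TensorFlow/Keras classifier to predict a letter. The input resolution, network architecture, and file names are illustrative assumptions, not values taken from the paper, whose actual training setup is described in later sections.

```python
# Minimal sketch of the kind of pipeline the abstract describes:
# OpenCV preprocesses a hand image and a TensorFlow/Keras CNN
# classifies it into one of the 26 letters. The 64x64 resolution,
# layer sizes, and file names are assumptions for illustration.
import cv2
import numpy as np
import tensorflow as tf

IMG_SIZE = 64          # assumed input resolution
NUM_CLASSES = 26       # letters A-Z, as stated in the abstract

def preprocess(path: str) -> np.ndarray:
    """Load a hand image, convert to grayscale, resize, and normalize."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))
    return img.astype("float32")[..., np.newaxis] / 255.0

def build_model() -> tf.keras.Model:
    """A small CNN classifier; the architecture is a placeholder."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(IMG_SIZE, IMG_SIZE, 1)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    model = build_model()
    # model.fit(...) would be run here on the collected sign-language dataset.
    x = preprocess("hand_sign.png")            # hypothetical input image
    letter_index = int(np.argmax(model.predict(x[np.newaxis, ...])))
    print("Predicted letter:", chr(ord("A") + letter_index))
```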
