Abstract

Visually impaired people face a variety of challenges in their day-to-day lives. Walking with a cane and relying on trial and error makes it very hard to navigate unfamiliar locations. With the advent of technology in this data-driven world, these barriers can be reduced through innovative applications of machine learning, deep learning and computer vision.
QR codes can be used to label familiar locations, giving a quick response in a closed (indoor) environment and thereby helping the blind person navigate within it.
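A minimal sketch of this idea, assuming the qrcode and pyzbar Python libraries named later in the abstract; the location label and file names are illustrative, not taken from the paper:

```python
# Sketch only: encode a known location into a QR code with the qrcode library,
# then decode it back from a camera image with pyzbar.
import cv2
import qrcode
from pyzbar.pyzbar import decode

# Offline step: generate a printable QR tag for a familiar location
# ("Library entrance" and the file name are assumed examples).
qrcode.make("Library entrance").save("library_entrance.png")

# Online step: decode any QR codes visible in a camera frame.
frame = cv2.imread("library_entrance.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
for symbol in decode(gray):
    location = symbol.data.decode("utf-8")
    print("You are near:", location)  # later converted to speech with pyttsx3
```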
The project attempts to help blind people become comfortable and confident in unfamiliar locations through a speech-assistive system. It consists of three modules, namely: (i) object detection and positioning, (ii) text-to-speech conversion, and (iii) QR-code-based location identification.
The project helps the user detect obstacles in their path and avoid them, and it also identifies certain locations in a closed environment. This is done with the help of CNNs and libraries such as OpenCV, YOLO, TensorFlow, qrcode (used for generating QR codes) and pyzbar (used for decoding them). The detected items are reported to the user as speech with the help of Python libraries such as pyttsx3.
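A minimal sketch of the detection-and-speech flow, assuming a pre-trained YOLOv3 model loaded through OpenCV's DNN module; the model and file names, the confidence threshold and the left/centre/right split are assumptions, not values from the paper:

```python
# Sketch only: detect objects in one camera frame with YOLO via OpenCV's DNN
# module, estimate a rough position, and announce the results with pyttsx3.
import cv2
import numpy as np
import pyttsx3

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")  # assumed files
with open("coco.names") as f:
    classes = [line.strip() for line in f]

frame = cv2.imread("frame.jpg")  # e.g. a single frame grabbed from the camera
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

detected = set()
for output in outputs:
    for det in output:              # det = [cx, cy, w, h, objectness, class scores...]
        scores = det[5:]
        class_id = int(np.argmax(scores))
        if scores[class_id] > 0.5:  # assumed confidence threshold
            cx = det[0]             # x-centre, normalised to [0, 1]
            side = "to the left" if cx < 0.33 else "to the right" if cx > 0.66 else "ahead"
            detected.add((classes[class_id], side))

# Report the detected objects to the user as speech.
engine = pyttsx3.init()
for name, side in detected:
    engine.say(f"{name} {side}")
engine.runAndWait()
```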
