Abstract

Good vision is a precious gift, but loss of vision is becoming an increasingly common issue. Blind or visually impaired people are often unaware of the dangers they face in daily life. To help them, the visual world has to be transformed into the audio world, with the potential to inform them about nearby objects. Visually impaired individuals face various challenges even in familiar environments and are at a disadvantage because they lack sufficient information about their surroundings. This project employs a deep convolutional neural network (CNN) to recognize objects captured from the real world. The captured image is compared with pre-trained objects stored in a dataset; the comparison is based on the shape and size of the objects. Using the TensorFlow framework, a MobileNet SSD model compares the real-time captured image with the pre-trained objects. If the image matches a trained object, the system displays the object's name, which is then converted into audio output with the help of gTTS. This helps blind users identify and detect the objects in front of them and receive the result as audio.
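The pipeline described above (detect with MobileNet SSD, then speak the label) can be sketched as follows. This is a minimal, illustrative sketch, not the project's actual code: `detections_to_phrases` is a hypothetical helper, the model loading and camera capture are only shown as comments, and the gTTS call is commented out because it requires network access.

```python
# Minimal post-processing sketch for a MobileNet SSD detector.
# In the full system the detections would come from a forward pass, e.g.:
#   net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
#                                  "MobileNetSSD_deploy.caffemodel")
# Here we only show how raw (class_id, confidence) pairs become phrases.

# Standard 21-class VOC label list commonly used with MobileNet SSD.
VOC_LABELS = ["background", "aeroplane", "bicycle", "bird", "boat", "bottle",
              "bus", "car", "cat", "chair", "cow", "diningtable", "dog",
              "horse", "motorbike", "person", "pottedplant", "sheep",
              "sofa", "train", "tvmonitor"]

def detections_to_phrases(detections, labels=VOC_LABELS, threshold=0.5):
    """Keep detections above `threshold` and turn them into spoken phrases.

    `detections` is a list of (class_id, confidence) pairs, the shape a
    decoded SSD forward pass is typically reduced to.
    """
    phrases = []
    for class_id, confidence in detections:
        if confidence >= threshold and 0 < class_id < len(labels):
            phrases.append(f"{labels[class_id]} detected")
    return phrases

# The phrases would then be spoken with gTTS, e.g.:
#   from gtts import gTTS
#   gTTS(text=phrase, lang="en").save("object.mp3")

if __name__ == "__main__":
    raw = [(15, 0.91), (8, 0.32), (6, 0.77)]  # person, chair, bus
    print(detections_to_phrases(raw))  # chair falls below the threshold
```

The confidence threshold matters for this use case: speaking every low-confidence detection aloud would overwhelm the user, so only reasonably certain detections are voiced.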

Highlights

  • Scholars tend to combine the window-sliding technique with a classifier to detect regions of an image, at all locations and scales, that contain the given objects

  • We have proposed a novel object detection method for indoor signage recognition

  • We will develop a prototype system for indoor signage detection and address the significant human-interface issues associated with wayfinding for blind users


Summary

Introduction

Scholars tend to combine the window-sliding technique with a classifier to detect regions of an image, at all locations and scales, that contain the given objects. We propose a new method to detect indoor signage that first employs a saliency map [39][40] to extract attended areas and then applies bipartite graph matching [41][42][43] to recognize indoor signage only within those attended areas instead of the whole image, which increases accuracy and reduces the computation cost. We also propose a computer vision-based method for restroom signage detection and recognition, and a computer vision-based method to detect staircases and pedestrian crosswalks using a commodity RGB-D camera, estimating the distance between the camera and the stairs for blind users.
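The bipartite-matching step above pairs features extracted from a signage template with features from an attended region and accepts the region when the matching cost is low. The toy sketch below illustrates the idea with a brute-force minimum-cost matching over a small cost matrix; the function names, the cost values, and the average-cost acceptance criterion are all hypothetical, not the papers' actual method.

```python
from itertools import permutations

def min_cost_matching(cost):
    """Brute-force minimum-cost bipartite matching on a square cost matrix.

    Returns (assignment, total_cost). Exhaustive search is fine for the
    handful of keypoints a signage template typically yields; a real system
    would use the Hungarian algorithm instead.
    """
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return best_perm, best_cost

def matches_template(cost, max_avg_cost=1.0):
    """Accept an attended region as the signage if the average matched
    feature distance is below a threshold (illustrative criterion)."""
    _, total = min_cost_matching(cost)
    return total / len(cost) <= max_avg_cost

# Toy feature-distance matrix: 3 template keypoints vs 3 region keypoints.
# Low values on the diagonal mean each template feature has a close match.
cost = [[0.2, 1.5, 2.0],
        [1.4, 0.3, 1.8],
        [2.1, 1.6, 0.4]]
print(matches_template(cost))
```

Restricting the matching to saliency-selected attended areas is what keeps this tractable: the expensive comparison runs on a few candidate regions rather than on every window of the full image.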


