Abstract

Visual impairment is one of the most common health conditions in the world. According to the World Health Organization, roughly one in four people has a visual impairment, and this number continues to grow, partly driven by the increasing use of digital devices. Although many assistive solutions exist for visually impaired people, most are expensive or impractical. In this paper, we propose a novel system that helps visually impaired people locate objects. The system guides the user toward a target object using an image-to-speech technique over the smartphone camera, giving voice directions to the user. The novelty of our approach is using the user's hand as a reference object: the system detects the user's hand in the camera view, recognizes the target objects within the camera's field of view, and then guides the user to the target object's location via image-to-speech. The target object's position is computed relative to the user's hand; directions are derived using deep learning and image-processing techniques, and the outcomes are announced to the user as speech. The system uses a Convolutional Neural Network (CNN) for object detection, based on the Single Shot MultiBox Detector (SSD) approach. For object detection on smartphones, SSD achieves higher accuracy than You Only Look Once (YOLO) and a higher frame rate (fps) than R-CNN, Fast R-CNN, or Faster R-CNN. The TensorFlow Lite model we use is based on SSD and was trained on the Common Objects in Context (COCO) dataset, which defines ninety-one object classes.
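The core guidance step described above — computing the target object's position relative to the detected hand and turning it into a spoken direction — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the normalized bounding-box convention, and the dead-zone tolerance `tol` are assumptions introduced for clarity.

```python
def bbox_center(box):
    # box = (xmin, ymin, xmax, ymax) in normalized image coordinates [0, 1]
    xmin, ymin, xmax, ymax = box
    return ((xmin + xmax) / 2.0, (ymin + ymax) / 2.0)

def direction_to_target(hand_box, target_box, tol=0.05):
    """Return a spoken direction guiding the hand toward the target.

    Directions are relative to the camera image: x grows to the right,
    y grows downward. `tol` is a dead zone (as a fraction of the image
    size) within which the hand counts as aligned with the target.
    """
    hx, hy = bbox_center(hand_box)
    tx, ty = bbox_center(target_box)
    dx, dy = tx - hx, ty - hy

    parts = []
    if dy < -tol:
        parts.append("up")
    elif dy > tol:
        parts.append("down")
    if dx < -tol:
        parts.append("left")
    elif dx > tol:
        parts.append("right")
    # When the hand is already aligned, tell the user to reach forward.
    return " and ".join(parts) if parts else "forward"
```

In a full pipeline, the two bounding boxes would come from the SSD detector's output for the "hand" and target classes, and the returned phrase would be passed to a text-to-speech engine.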

