Abstract

This research presents INVys, a system that addresses indoor navigation for persons with visual impairment by leveraging the capabilities of an RGB-D camera. The system uses the camera's depth information for micro-navigation, i.e., sensing and avoiding obstacles in the immediate environment. For this purpose, a novel auto-adaptive double thresholding (AADT) method is proposed to detect obstacles, calculate their distance, and provide feedback that allows the user to avoid them. AADT is evaluated against baseline and auto-adaptive thresholding (AAT) methods on four criteria: accuracy, precision, robustness, and execution time. The results show that AADT excels in accuracy, precision, and robustness, making it well suited to obstacle detection and avoidance for indoor navigation by persons with visual impairment. In addition to micro-navigation, the INVys system uses the camera's color information for macro-navigation, i.e., recognizing and following navigational markers called optical glyphs. An automatic glyph binarization method is used to recognize the glyphs and is evaluated on two criteria: accuracy and execution time. The results indicate that the proposed method recognizes optical glyphs accurately and efficiently, making them suitable as navigational markers in indoor environments. The study also quantifies how glyph size, recognition distance, and glyph tilt correlate with recognition accuracy, and from these correlations derives the minimum glyph size that can be practically used for indoor navigation by persons with visual impairment. Overall, this research presents a promising solution for indoor navigation for persons with visual impairment by combining an RGB-D camera with novel methods for obstacle detection and avoidance and for recognizing navigational markers.
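To make the depth-based micro-navigation idea concrete, the sketch below shows a generic double-thresholding pass over a single depth frame in Python/NumPy. It is an illustrative assumption only: the function name, threshold values, frame dimensions, and return convention are hypothetical, and the fixed thresholds merely stand in for the automatic adaptation that the proposed AADT method performs.

    # Illustrative sketch of generic double thresholding on a depth frame.
    # The actual AADT method adapts its thresholds automatically; the fixed
    # values, array shape, and return convention below are assumptions.
    import numpy as np

    def detect_obstacles(depth_mm, near_mm=400.0, far_mm=1500.0):
        """Return (obstacle_present, closest_distance_mm) for one depth frame.

        depth_mm : 2-D array of per-pixel depth in millimetres (0 = no reading).
        near_mm, far_mm : lower and upper bounds of the depth band in which
        pixels are treated as obstacles in front of the user.
        """
        valid = depth_mm > 0                                  # ignore invalid pixels
        in_band = valid & (depth_mm >= near_mm) & (depth_mm <= far_mm)
        if not in_band.any():
            return False, float("inf")
        closest = float(depth_mm[in_band].min())              # distance fed back to the user
        return True, closest

    # Example with a synthetic 480x640 frame containing one object at ~0.9 m.
    frame = np.full((480, 640), 3000.0)
    frame[200:280, 300:380] = 900.0
    print(detect_obstacles(frame))                            # (True, 900.0)

In a full system, the detected depth band would drive audio or haptic feedback to the user rather than a printed result.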
