Abstract

In this paper, we present a system that allows visually impaired people to navigate autonomously in unknown indoor and outdoor environments. The system, explicitly designed for people with low vision, can easily be generalized to other users. We assume that special landmarks are placed along pre-defined paths to help users localize themselves. Our novel approach exploits both the inertial sensors and the camera integrated into the smartphone. The navigation system can also provide the users with direction estimates derived from the tracking system. The success of our approach is demonstrated through experimental tests performed both in controlled indoor environments and in real outdoor installations, and a comparison with deep learning methods is presented.

Highlights

  • The possibility of exploiting Information and Communication Technologies (ICT) for supporting Visually Impaired People (VIP) and promoting vision substitution has been widely considered over the last five decades, especially with the emergence of electronic Braille displays, synthetic speech and ultrasonic sensors

  • We explored the performance improvements of Pedestrian Dead Reckoning (PDR) schemes that can be enabled by the exploitation of suitable Computer Vision (CV) functions, devised to provide additional heading and velocity measurements

  • The main contributions of the paper are the following: i) demonstrating how the camera sensor can be exploited to provide measurements similar to those of IMU (Inertial Measurement Unit) systems, and characterizing the errors of these measurements; ii) quantifying the complexity of the processing required by the computer vision algorithms, discussing the best trade-offs between energy consumption and measurement availability, and comparing them with deep learning solutions; iii) designing a tracking system based on the integration of IMU and camera-based measurements, and evaluating its accuracy in experiments with real users
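The Pedestrian Dead Reckoning scheme mentioned in the highlights can be sketched as a step-and-heading update: detect steps from peaks in the accelerometer magnitude, then advance the position by one stride along the current heading. The peak threshold and the fixed stride length below are illustrative assumptions, not values from the paper.

```python
import math

def detect_steps(accel_norms, threshold=11.0):
    """Count steps as local peaks of the accelerometer magnitude (m/s^2)
    above a threshold. The threshold is a hypothetical value; real systems
    calibrate it per user and per device placement."""
    steps = []
    for i in range(1, len(accel_norms) - 1):
        a = accel_norms[i]
        if a > threshold and a >= accel_norms[i - 1] and a > accel_norms[i + 1]:
            steps.append(i)
    return steps

def pdr_update(position, heading_rad, stride_m):
    """Advance the 2-D position by one detected step along the current heading."""
    x, y = position
    return (x + stride_m * math.cos(heading_rad),
            y + stride_m * math.sin(heading_rad))

# Illustrative accelerometer trace: two peaks -> two steps heading due east.
trace = [9.8, 10.1, 12.3, 10.0, 9.7, 10.2, 12.8, 10.1, 9.8]
pos = (0.0, 0.0)
for _ in detect_steps(trace):
    pos = pdr_update(pos, heading_rad=0.0, stride_m=0.7)
# pos is now (1.4, 0.0): two 0.7 m steps along the x axis
```

The weakness this simple form exposes is exactly the one the paper targets: heading errors from the IMU accumulate over steps, which is why additional camera-based heading measurements are valuable.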


Summary

INTRODUCTION

The possibility of exploiting Information and Communication Technologies (ICT) for supporting Visually Impaired People (VIP) and promoting vision substitution has been widely considered over the last five decades, especially with the emergence of electronic Braille displays, synthetic speech and ultrasonic sensors. The main contributions of the paper are the following: i) demonstrating how the camera sensor can be exploited to provide measurements similar to those of IMU (Inertial Measurement Unit) systems, and characterizing the errors of these measurements; ii) quantifying the complexity of the processing required by the computer vision algorithms, discussing the best trade-offs between energy consumption and measurement availability, and comparing them with deep learning solutions; iii) designing a tracking system based on the integration of IMU and camera-based measurements, and evaluating its accuracy in experiments with real users.
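One way to realize the IMU/camera integration described in point iii) is a scalar Kalman filter on the heading state: the gyroscope drives the prediction at every step, and a camera-based heading measurement, which may arrive only intermittently, corrects the drift when available. This is a minimal sketch of that idea; the noise variances are illustrative placeholders, not the paper's tuned values.

```python
def fuse_heading(theta, var, gyro_rate, dt, q_gyro, cam_theta=None, r_cam=1.0):
    """One predict/update cycle of a scalar Kalman filter on the heading (rad).

    Predict with the gyroscope rate; if a camera-based heading measurement is
    available, correct the prediction with it. The process noise q_gyro and
    measurement noise r_cam are hypothetical tuning parameters.
    """
    # Predict: integrate the gyro rate and grow the uncertainty.
    theta = theta + gyro_rate * dt
    var = var + q_gyro * dt
    # Update: blend in the camera heading when one is available.
    if cam_theta is not None:
        gain = var / (var + r_cam)            # Kalman gain in [0, 1)
        theta = theta + gain * (cam_theta - theta)
        var = (1.0 - gain) * var
    return theta, var

# The gyro drifts away from the true heading (0 rad) until a camera
# measurement re-anchors the estimate and shrinks the variance.
theta, var = 0.0, 0.1
theta, var = fuse_heading(theta, var, gyro_rate=0.02, dt=1.0, q_gyro=0.01)
theta, var = fuse_heading(theta, var, gyro_rate=0.02, dt=1.0, q_gyro=0.01,
                          cam_theta=0.0, r_cam=0.05)
```

Because the camera branch is optional, the filter degrades gracefully to pure gyro integration when vision processing is skipped, which is the energy/availability trade-off discussed in point ii).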

RELATED WORK
COMPUTER VISION ALGORITHMS AS MEASUREMENT SENSORS
IMU BASED DEAD RECKONING
TRACKING SYSTEM WITH DATA FUSION
MEASUREMENT MODEL
Findings
CONCLUSION

