Abstract

Navigation assistance is an active research area, one aim of which is to foster independent living for people with vision impairments. Although many navigation assistants employ advanced technologies and methods, we found that they do not explicitly address two essential requirements of a navigation assistant: portability and convenience. It is imperative that a navigation assistant for the visually impaired be portable and convenient to use without much training. Moreover, some navigation assistants do not provide users with detailed information about the types of obstacles detected, which is essential for making informed decisions when navigating in real time. To address these gaps, we propose DeepNAVI, a smartphone-based navigation assistant that leverages deep learning capabilities. Besides providing information about the types of obstacles present, our system can also provide their position, their distance from the user, their motion status, and scene information. All of this information is delivered to users in audio mode without compromising portability or convenience. With a small model size and rapid inference time, our navigation assistant can be deployed on a portable device such as a smartphone and work seamlessly in a real-time environment. We conducted a pilot test with a user to assess the usefulness and practicality of the system. The results indicate that our system has the potential to be a practical and useful navigation assistant for the visually impaired.
