Abstract
Navigational assistance aims to help visually-impaired people move through their environment safely and independently. The task is challenging because it requires detecting a wide variety of scenes to provide higher-level assistive awareness. Vision-based technologies using monocular detectors or depth sensors have emerged over several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have improved the mobility of impaired people to a large extent. However, running all detectors jointly increases latency and burdens computational resources. In this paper, we propose leveraging pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for terrain awareness regarding traversable areas, sidewalks, stairs and water hazards, but also for the avoidance of short-range obstacles, fast-approaching pedestrians and vehicles. The core of our unification proposal is a deep architecture aimed at attaining efficient semantic understanding. We have integrated the approach into a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments demonstrates accuracy on par with state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually-impaired users, demonstrating the effectiveness and versatility of the assistive framework.
Highlights
This paper focuses on navigation assistance for visually-impaired people through terrain awareness, a technical term originally coined for commercial aircraft.
In a previous work [41], we addressed the detection of water puddles beyond traversability using a polarized RGB-Depth (pRGB-D) sensor, and generated stereo sound feedback to guide visually-impaired users toward the prioritized direction for hazard avoidance.
We proposed ERFNet [64,65], which aims to maximize the trade-off between accuracy and efficiency, making Convolutional Neural Network (CNN)-based segmentation suitable for deployment on current embedded hardware platforms.
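The stereo sound feedback mentioned above maps a prioritized heading to left/right channel intensities. As an illustration only (the paper does not specify its panning law), a minimal sketch using a standard constant-power pan, with a hypothetical `fov_deg` field-of-view parameter:

```python
import math

def direction_to_stereo_gains(angle_deg, fov_deg=90.0):
    """Map a horizontal direction angle (negative = left, positive = right)
    to constant-power left/right channel gains in [0, 1].

    Hypothetical sketch: the actual feedback scheme in [41] may differ.
    """
    # Clamp the angle to the assumed field of view and normalize to [-1, 1].
    half_fov = fov_deg / 2.0
    x = max(-half_fov, min(half_fov, angle_deg)) / half_fov
    # Constant-power pan law: theta sweeps 0 (full left) to pi/2 (full right),
    # so left^2 + right^2 == 1 and perceived loudness stays roughly constant.
    theta = (x + 1.0) * math.pi / 4.0
    return math.cos(theta), math.sin(theta)

left, right = direction_to_stereo_gains(0.0)  # straight ahead: equal gains
```

A constant-power law is preferred over simple linear panning because it avoids a perceived loudness dip when the cue points straight ahead.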
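Pixel-wise semantic segmentation, as produced by a network such as ERFNet, assigns every pixel a class label by taking the arg-max over per-pixel class scores. A minimal dependency-free sketch of that final step, using made-up class names for illustration:

```python
def logits_to_label_map(logits):
    """Collapse per-pixel class scores (an H x W x C nested list) into a
    single-channel H x W label map by taking the arg-max class per pixel.

    In practice the scores would come from a CNN decoder; here they are
    plain nested lists to keep the sketch self-contained.
    """
    return [
        [max(range(len(pixel)), key=pixel.__getitem__) for pixel in row]
        for row in logits
    ]

# Toy 1x2 "image" with 3 hypothetical classes:
# 0 = traversable, 1 = obstacle, 2 = water hazard.
scores = [[[0.1, 0.7, 0.2], [0.8, 0.1, 0.1]]]
labels = logits_to_label_map(scores)  # → [[1, 0]]
```

The resulting label map is what unifies the perception tasks: traversable areas, sidewalks, stairs, water hazards, pedestrians and vehicles are all read off the same per-pixel output rather than from separate detectors.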
Summary
The main purpose of this work is navigation assistance for visually-impaired people through terrain awareness, a technical term originally coined for commercial aircraft (Sensors 2018, 18, 1506). The improvement of Computer Vision (CV) has been an enormous benefit for the Visually-Impaired (VI), allowing individuals with blindness or visual impairments to access, understand and explore surrounding environments [3,5,6]. These trends have accelerated the proliferation of monocular detectors and cost-effective RGB-Depth (RGB-D) sensors [5], constituting essential prerequisites for aiding the perception and navigation of visually-impaired individuals by leveraging robotic vision [7]. A broad variety of navigational assistive technologies have been developed to accomplish specific goals, including avoiding obstacles [8,9,10,11,12,13,14,15,16,17], finding paths [18,19,20,21,22,23,24,25,26,27,28,29], locating sidewalks [30,31,32,33], ascending stairs [34,35,36,37,38] or descending steps [39,40], and negotiating water hazards [41].