Abstract
Over the last decades, the development of navigation devices capable of guiding the blind through indoor and/or outdoor scenarios has remained a challenge. In this context, this paper’s objective is to provide an updated, holistic view of this research, in order to enable developers to exploit the different aspects of its multidisciplinary nature. To that end, previous solutions will be briefly described and analyzed from a historical perspective, from the first “Electronic Travel Aids” and early research on sensory substitution and indoor/outdoor positioning, to recent systems based on artificial vision. Thereafter, user-centered design fundamentals are addressed, including the main points of criticism of previous approaches. Finally, several technological achievements are highlighted, as they could underpin future feasible designs. In line with this, smartphones and wearables with built-in cameras are indicated as potentially feasible platforms for state-of-the-art computer vision solutions, allowing both the positioning of the user and the monitoring of the user’s surroundings. These functionalities could then be further boosted by means of remote resources, leading to cloud-computing schemes or even remote sensing via urban infrastructure.
Highlights
Recent studies on global health estimate that 217 million people suffer from visual impairment, and 36 million from blindness [1]
Some of these classic approaches, and their impact on the target public, could even remain applicable to current device design
Over the last 70 years, researchers have worked on various prototypes of electronic obstacle-detection devices for blind and visually impaired (BVI) people, known as electronic travel aids (ETAs)
Summary
Recent studies on global health estimate that 217 million people suffer from visual impairment, and 36 million from blindness [1]. The capability to detect and avoid nearby obstacles relates to “mobility.” A lack of vision heavily hampers the performance of such tasks, requiring a conscious effort to integrate perceptions from the remaining sensory modalities, memories, or even verbal descriptions; past work described the result as a “cognitive collage” [2]. This is important for the field of non-visual human–machine interfaces, as the underlying perceptual and cognitive processes remain the same.