Abstract

Vision is the principal source of information about the surrounding world, facilitating our movement and the performance of everyday activities. Consequently, blind people have great difficulty moving, especially in unknown environments, which reduces their autonomy and puts them at risk of accidents. Electronic Travel Aids (ETAs) have emerged to provide navigation assistance for blind people. In this work, we present the methodology followed to implement a stereo vision-based system that helps blind people traverse unknown environments safely by sensing the world, segmenting the floor in 3D, fusing local 2D grids using the camera tracking, building a global 2D occupancy grid, reacting to nearby obstacles, and generating vibration patterns with a haptic belt. To segment the floor in 3D, we evaluate normal vectors and the orientation of the camera, obtained from depth and inertial data, respectively. We then apply RANSAC to efficiently compute the equation of the supporting plane (the floor). The local grids are fused into a global map that records free and occupied areas along the whole trajectory. For parallel processing of the dense data, we leverage the Jetson TX2, achieving high performance, low power consumption, and portability. Finally, we present experimental results obtained with ten participants under different conditions, with obstacles of different heights, hanging obstacles, and dynamic obstacles. The results show high performance and acceptance by the participants, who highlighted the ease of following the instructions and the short training period.
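
To make the floor-segmentation step concrete, the sketch below shows a generic RANSAC plane fit on a 3D point cloud, a minimal illustration of the technique the abstract names. It is not the paper's implementation: the function name, thresholds, and iteration budget are assumptions, and the paper additionally filters candidate floor points using surface normals and the IMU-derived camera orientation, which this sketch omits.

```python
# Minimal RANSAC plane fit on a 3D point cloud (NumPy only).
# Hypothetical sketch: thresholds and iteration count are illustrative,
# not the parameters used in the paper.
import numpy as np

def ransac_plane(points, n_iters=200, dist_thresh=0.02, rng=None):
    """Fit a plane n.x + d = 0 to `points` (N x 3); return (n, d, inlier mask)."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        # Sample 3 distinct points and form a candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                    # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p0)
        # Keep the hypothesis with the most points near the plane.
        inliers = np.abs(points @ normal + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    if best_model is None:
        raise RuntimeError("no valid plane hypothesis found")
    return best_model[0], best_model[1], best_inliers

# Usage: a synthetic noisy floor at z = 0 plus a block of obstacle points.
floor = np.column_stack([np.random.uniform(-2, 2, (500, 2)),
                         np.random.normal(0, 0.005, 500)])
obstacle = np.random.uniform([0, 0, 0.1], [0.5, 0.5, 0.6], (100, 3))
n, d, mask = ransac_plane(np.vstack([floor, obstacle]))
print("plane normal:", n, "offset:", d, "inliers:", mask.sum())
```

In this setting, points within the distance threshold of the best plane are treated as floor, and the remaining points are candidates for obstacles.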

Highlights

  • Electronic Travel Aids (ETAs) for assisting blind people, especially vision-based aids, have adopted approaches from autonomous vehicles, since both must meet similar challenges: achieving real-time performance, handling previously unseen environments, being robust to different conditions and dynamic environments, and being safe for the user, for other people, and for surrounding objects.

  • The average accuracy in haptic perception is 90.75%, which is adequate for guiding blind people toward walkable space in an unknown environment.

  • We have shown that transferring technology from autonomous cars to assistive tools for blind people is feasible because (1) both share requirements such as real-time performance, operation in unknown environments, robustness to changing environments, and safety, and (2) 3D vision sensors have increased in accuracy and portability, as have the computing power and portability of embedded processors.

Introduction

Electronic Travel Aids (ETAs) for assisting blind people, especially vision-based aids, have adopted approaches from autonomous vehicles, since both must meet similar challenges: achieving real-time performance, handling previously unseen environments, being robust to different conditions and dynamic environments, and being safe for the user, for other people, and for surrounding objects. Accordingly, algorithms from the field of autonomous vehicles can be used to assist blind people in navigation tasks such as scene understanding, object detection, segmentation, path planning, localization, and mapping.
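
As an illustration of the mapping task mentioned above (the fusion of local 2D grids into a global occupancy grid described in the abstract), here is a minimal sketch assuming a known 2D camera pose from tracking. The cell size, grid extents, 0/1 occupancy encoding, and the max() fusion rule are illustrative assumptions, not the paper's actual design.

```python
# Hypothetical sketch: fuse a local occupancy grid into a global one,
# given a 2D camera pose (x, y, heading) from tracking.
import numpy as np

CELL = 0.05  # metres per cell (assumed resolution)

def fuse_local_grid(global_grid, local_grid, pose, local_origin=(0.0, 0.0)):
    """Stamp `local_grid` (0 = free, 1 = occupied) into `global_grid`.

    pose: (x, y, theta) of the camera in the global frame (metres, radians).
    """
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    rows, cols = np.nonzero(local_grid >= 0)  # every local cell, free or occupied
    # Local cell centres in metres, rotated and translated to the global frame.
    lx = local_origin[0] + (cols + 0.5) * CELL
    ly = local_origin[1] + (rows + 0.5) * CELL
    gx, gy = c * lx - s * ly + x, s * lx + c * ly + y
    gi = np.floor(gy / CELL).astype(int)
    gj = np.floor(gx / CELL).astype(int)
    # Keep cells inside the global map; fuse with max(), so a cell that has
    # ever been seen occupied stays occupied.
    ok = (gi >= 0) & (gi < global_grid.shape[0]) & \
         (gj >= 0) & (gj < global_grid.shape[1])
    np.maximum.at(global_grid, (gi[ok], gj[ok]), local_grid[rows[ok], cols[ok]])
    return global_grid

# Usage: a 20 m x 20 m global map and a 2 m x 2 m local grid with one obstacle.
world = np.zeros((400, 400), dtype=np.uint8)
local = np.zeros((40, 40), dtype=np.uint8)
local[10:14, 20:24] = 1
fuse_local_grid(world, local, pose=(10.0, 10.0, np.pi / 4))
print("occupied global cells:", int(world.sum()))
```

A probabilistic update (e.g. log-odds) would be a natural alternative to the max() rule; the sticky-occupied variant is shown here only because it is the shortest to state.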
