Abstract

Wearable auxiliary devices for visually impaired people are a highly attractive research topic. Although many proposed wearable navigation devices can assist visually impaired people with obstacle avoidance and navigation, these devices cannot feed back detailed information about the obstacles or help the visually impaired understand the environment. In this paper, we propose a wearable navigation device for the visually impaired that integrates semantic visual SLAM (Simultaneous Localization And Mapping) with a newly launched, powerful mobile computing platform. The system uses a structured-light RGB-D (color and depth) camera as the sensor and the mobile computing platform as the control center. We also focus on the technology that combines SLAM with the extraction of semantic information from the environment, so that the computing platform understands the surrounding environment in real time and can feed the results back to the visually impaired user in the form of voice broadcasts. Finally, we tested the performance of the proposed semantic visual SLAM system on this device. The results indicate that the system can run in real time on a wearable navigation device with sufficient accuracy.
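The abstract describes a per-frame loop: an RGB-D frame is localized by the SLAM module, labeled by the semantic module, and the result is spoken to the user. The following is a purely illustrative sketch of that control flow; the frame fields, module behavior, and message format are hypothetical stand-ins, not the paper's implementation:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class RGBDFrame:
    """Hypothetical RGB-D frame: color image plus per-frame summary values."""
    rgb: list = field(default_factory=list)  # color image (placeholder)
    depth_m: float = 0.0                     # nearest obstacle depth in meters
    label: str = ""                          # dominant semantic label

def track_pose(frame: RGBDFrame, pose: Tuple[float, float]) -> Tuple[float, float]:
    """Toy SLAM step; a real system would run feature-based tracking here."""
    return (pose[0] + 0.1, pose[1])  # pretend the user moved 10 cm forward

def announce(frame: RGBDFrame) -> str:
    """Turn the semantic label and depth into the voice-broadcast text."""
    return f"{frame.label} ahead at {frame.depth_m:.1f} meters"

def run_pipeline(frames: List[RGBDFrame]) -> List[str]:
    pose = (0.0, 0.0)
    messages = []
    for frame in frames:
        pose = track_pose(frame, pose)   # localization
        messages.append(announce(frame)) # semantic feedback to the user
    return messages

msgs = run_pipeline([RGBDFrame([], 1.5, "chair"), RGBDFrame([], 0.8, "door")])
print(msgs)  # → ['chair ahead at 1.5 meters', 'door ahead at 0.8 meters']
```

In the actual device the announce step would feed a text-to-speech engine rather than return strings; the sketch only shows how localization and semantic feedback interleave per frame.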

Highlights

  Accepted: 17 February 2021

  • Helping visually impaired people live and travel is an important issue in social welfare

  • The SLAM system is the foundation of the semantic visual SLAM system; its accuracy is positively correlated with the accuracy and final performance of the wearable navigation device (WND) system

  • We experimentally validated our system, in particular its execution speed, the global map experiments, the accuracy of the SLAM trajectory on the TUM dataset, and the performance of the semantic segmentation network
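Trajectory accuracy on the TUM RGB-D benchmark is commonly reported as absolute trajectory error (ATE). Below is a minimal sketch of the RMSE computation, assuming the estimated and ground-truth trajectories are already expressed in a common frame (the standard TUM evaluation first aligns them, e.g. with Horn's closed-form method); the sample trajectories are made up for illustration:

```python
import math
from typing import List, Tuple

def ate_rmse(ground_truth: List[Tuple[float, float]],
             estimated: List[Tuple[float, float]]) -> float:
    """RMSE of per-pose translational error between two aligned 2-D
    trajectories, the core of the TUM absolute trajectory error metric."""
    assert len(ground_truth) == len(estimated)
    sq_errors = [(gx - ex) ** 2 + (gy - ey) ** 2
                 for (gx, gy), (ex, ey) in zip(ground_truth, estimated)]
    return math.sqrt(sum(sq_errors) / len(sq_errors))

# Made-up trajectories: estimate wobbles 0.1 m around the ground truth.
gt  = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
est = [(0.0, 0.1), (1.0, -0.1), (2.0, 0.1)]
print(round(ate_rmse(gt, est), 3))  # → 0.1
```

Real evaluations use full 3-D poses with timestamp association, but the error statistic is the same root-mean-square over per-pose position differences.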

Summary

A Wearable Navigation Device for Visually Impaired People

Zhuo Chen 1, Xiaoming Liu 1,*, Masaru Kojima 2, Qiang Huang 1 and Tatsuo Arai 1,3. Key Laboratory of Biomimetic Robots and Systems, Ministry of Education, State Key Laboratory of Intelligent. Global Alliance Laboratory, The University of Electro-Communications, Tokyo 182-8585, Japan

Introduction
Real-Time Semantic Visual SLAM
Real-Time SLAM System
Framework of Semantic Visual SLAM
Semantic Segmentation
Probabilistic Data Association
Experiments and Results
Experimental Platform Setup
Performance Evaluation of the Real-Time SLAM
Result of Semantic Segmentation
Conclusions
