Abstract
Assistive devices for visually impaired people (VIP) that support daily traveling and improve social inclusion are developing fast. Most of them address navigation or obstacle avoidance, while other works focus on helping VIP recognize surrounding objects. However, very few couple both capabilities (i.e., navigation and recognition). To address these needs, this paper presents a wearable assistive device that allows VIP to (i) navigate safely and quickly in unfamiliar environments, and (ii) recognize objects in both indoor and outdoor environments. The device consists of a consumer Red, Green, Blue and Depth (RGB-D) camera and an Inertial Measurement Unit (IMU), which are mounted on a pair of eyeglasses, and a smartphone. The device leverages the continuity of ground height among adjacent image frames to segment the ground accurately and rapidly, and then searches for a moving direction along the segmented ground. A lightweight Convolutional Neural Network (CNN)-based object recognition system is developed and deployed on the smartphone to increase the perception ability of VIP and support the navigation system. It provides semantic information about the surroundings, such as the categories, locations, and orientations of objects. Human–machine interaction is performed through an audio module (a beeping sound for obstacle alerts, speech recognition for understanding user commands, and speech synthesis for expressing semantic information about the surroundings). We evaluated the performance of the proposed system through experiments conducted in both indoor and outdoor scenarios, demonstrating the efficiency and safety of the proposed assistive system.
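The ground-height-continuity idea can be illustrated with a minimal sketch: back-project each depth pixel into metric coordinates, then flag as ground those pixels whose height below the camera matches the previous frame's ground-height estimate within a tolerance. This is a simplified illustration only, not the paper's implementation; the function name `segment_ground`, the flat-ground and level-camera assumptions, and the pinhole parameters (`fy`, `cy`) are hypothetical choices for the example.

```python
import numpy as np

def segment_ground(depth, fy, cy, prev_ground_h, tol=0.05):
    """Flag ground pixels in a depth frame via height continuity.

    depth         : (H, W) array of depths in metres.
    fy, cy        : pinhole focal length / principal point (vertical axis).
    prev_ground_h : ground height (metres below camera) from the last frame.
    tol           : allowed height deviation in metres.
    """
    h, w = depth.shape
    # Back-project each pixel's vertical offset into metres:
    # y = (v - cy) * z / fy, with y > 0 below the optical axis.
    vs = np.arange(h).reshape(-1, 1)
    y = (vs - cy) * depth / fy
    # Height continuity: ground pixels lie near the previous estimate.
    mask = np.abs(y - prev_ground_h) < tol
    # Update the running ground-height estimate from the new mask,
    # so the estimate tracks gradual slope changes across frames.
    new_ground_h = float(y[mask].mean()) if mask.any() else prev_ground_h
    return mask, new_ground_h
```

A real system would additionally use the IMU to compensate for camera pitch and roll before computing heights; this sketch assumes a level camera for clarity.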
Highlights
- According to the global statistics of the World Health Organization (WHO), 188.5 million people have mild vision impairment, 217 million people have moderate to severe vision impairment, and 36 million people are blind [1].
- We first test the ground segmentation performance, since it plays an important role in the whole system.
- All participants in this experiment consented to the anonymous publication of the results.
Summary
According to the global statistics of the World Health Organization (WHO), 188.5 million people have mild vision impairment, 217 million people have moderate to severe vision impairment, and 36 million people are blind [1]. Vision impairment has a significant impact on people's lives, including the ability to navigate and recognize the environment independently. Aside from the achievements of medicine, neuroscience, and biotechnology in seeking an ultimate solution to vision impairment [2], electronic and computer technologies can provide assistive tools that improve the quality of life of visually impaired people (VIP) and allow better integration into society. To satisfy these needs, many electronic assistive devices [4,5,6,7] have been proposed in recent years. These designs can be broadly classified into two categories: those that assist navigation and obstacle avoidance, and those that help VIP recognize surrounding objects. Integrating both recognition and navigation capabilities into a single system can dramatically improve the VIP's daily traveling.