Abstract

In recent years, with the development of depth cameras and scene detection algorithms, a wide variety of electronic travel aids for visually impaired people have been proposed. However, it remains challenging to convey scene information to visually impaired people efficiently. In this paper, we propose three auditory interaction methods, i.e., depth image sonification, obstacle sonification, and path sonification, which convey raw depth images, obstacle information, and path information, respectively, to visually impaired people. The three sonification methods are compared comprehensively through a field experiment attended by twelve visually impaired participants. The results show that the sonification of high-level scene information, such as the direction of a pathway, is easier to learn and adapt to, and is better suited for point-to-point navigation. In contrast, through the sonification of low-level scene information, such as raw depth images, visually impaired people can understand the surrounding environment more comprehensively. Furthermore, no single interaction method is best suited for all participants, and visually impaired individuals need a period of time to find the interaction method that suits them best. Our findings highlight the features and differences of the three scene detection algorithms and the corresponding sonification methods. The results provide insights into the design of electronic travel aids, and the conclusions can also be applied in other fields, such as sound feedback in virtual reality applications.
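The abstract does not give implementation details for the three mappings; as a rough illustration only, the following Python sketch shows one plausible depth-image sonification scheme, in which each image column is panned from left to right and the mean depth of that column controls the pitch of a tone (near surfaces sound higher). The mapping, the parameter choices, and the sonify_depth_image helper are assumptions made for this sketch, not the authors' actual method.

    # Minimal sketch of depth-image sonification (an assumed mapping, not
    # the authors' exact scheme): image columns are panned left to right,
    # and the mean depth of each column sets the tone frequency.
    import numpy as np

    SAMPLE_RATE = 44100  # audio sampling rate in Hz

    def sonify_depth_image(depth, duration=1.0, f_lo=200.0, f_hi=2000.0):
        """Render a depth image (H x W, in metres) as a stereo sweep.

        Columns are played in sequence from left to right; near columns
        are mapped to high pitch, far columns to low pitch.
        """
        _, w = depth.shape
        samples_per_col = int(SAMPLE_RATE * duration / w)
        t = np.arange(samples_per_col) / SAMPLE_RATE
        d_min, d_max = depth.min(), depth.max()
        left, right = [], []
        for col in range(w):
            # Normalise the mean column depth to [0, 1] (0 = nearest).
            d = (depth[:, col].mean() - d_min) / (d_max - d_min + 1e-9)
            freq = f_hi - d * (f_hi - f_lo)      # near -> high pitch
            tone = 0.3 * np.sin(2 * np.pi * freq * t)
            pan = col / max(w - 1, 1)            # 0 = hard left, 1 = hard right
            left.append(tone * (1.0 - pan))
            right.append(tone * pan)
        return np.stack([np.concatenate(left), np.concatenate(right)])

    if __name__ == "__main__":
        fake_depth = np.random.uniform(0.5, 4.0, size=(120, 160))  # toy frame
        stereo = sonify_depth_image(fake_depth)                    # shape (2, N)
        print(stereo.shape)

Under the same assumptions, obstacle and path sonification would replace the per-column tones with, respectively, discrete cues for detected obstacles and a single cue encoding the pathway direction; the same pan and pitch conventions could be reused.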

Highlights

  • We find that as the degree of image processing deepens, the higher the level of information extracted from images, the lower the cognitive burden imposed on Visually Impaired People (VIP); at the same time, the environmental details lost during image processing make it difficult for VIP to reconstruct the scene in their mind


Introduction

According to the World Health Organization, around 253 million people live with vision impairments in the world [1]. Visually Impaired People (VIP) meet various difficulties when they travel in unfamiliar environments due to their visual impairments. In the late twentieth century, with the development of semiconductor sensors and portable computers, a broad range of Electronic Travel Aids (ETAs) were proposed to help VIP perceive environments and avoid obstacles [1]. ETAs usually use ultrasonic sensors to detect obstacles and remind VIP through vibration or beeps [2,3]. Due to their low spatial resolution, ultrasonic sensors can only acquire limited information in each measurement, which is insufficient for VIP to perceive environments in real time. In the past few years, we have witnessed the rapid development of RGB-Depth (RGB-D) cameras and scene detection algorithms.
