Abstract

This paper presents an assessment of cognitive load (an effective real-time index of task difficulty) and of brain activation during an experiment in which eight visually impaired subjects performed two types of tasks while using the white cane and the Sound of Vision assistive device with three types of sensory input—audio, haptic, and multimodal (audio and haptic simultaneously). The first task was to identify object properties; the second was to navigate and avoid obstacles in both virtual-environment and real-world settings. The results showed that the haptic stimuli were less intuitive than the audio ones and that navigation with the Sound of Vision device increased cognitive load and working memory load. Visual cortex asymmetry was lower under multimodal stimulation than under separate (audio or haptic) stimulation. There was no correlation between visual cortical activity and the number of collisions during navigation, regardless of the type of navigation or sensory input. The visual cortex was activated when using the device, but only for the late-blind users. For all subjects, navigation with the Sound of Vision device induced a low negative valence, in contrast with white cane navigation.
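The visual cortex asymmetry mentioned above refers to a lateralization of activation between hemispheres. The abstract does not give the authors' formula; a common convention for such measures is a normalized difference of EEG band power between homologous right- and left-hemisphere channels (e.g. occipital O1/O2). A minimal sketch under that assumption (the function name and inputs are illustrative, not taken from the paper):

```python
def asymmetry_index(left_power: float, right_power: float) -> float:
    """Normalized hemispheric asymmetry of EEG band power.

    Positive values indicate stronger right-hemisphere activation,
    negative values stronger left-hemisphere activation; 0 means
    the two hemispheres are equally active.
    """
    return (right_power - left_power) / (right_power + left_power)

# Example: band power (e.g. alpha, in uV^2) over occipital channels O1/O2
print(asymmetry_index(4.0, 4.0))  # symmetric activation -> 0.0
print(asymmetry_index(2.0, 6.0))  # right-lateralized -> 0.5
```

An index like this, computed per condition, would let the audio, haptic, and multimodal conditions be compared on a single scale, which is consistent with the abstract's claim that asymmetry was lower under multimodal stimulation.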

Highlights

  • Worldwide, approximately 2.2 billion people have a vision impairment or suffer from blindness, caused primarily by uncorrected refractive errors, cataracts, age-related macular degeneration, and glaucoma

  • We present a study of cognitive load assessment and brain activation evaluation during an experiment in which eight visually impaired subjects performed various object detection and navigation activities while using both the white cane and the Sound of Vision project (SoV) device, which provided three types of sensory input—audio cues delivered through headphones, haptic cues delivered as vibrations applied on a vest placed on the user’s abdomen, and a combination of both audio and haptic information, called the multimodal sensory input

  • This paper presents an experimental framework and a study based on EEG, heart rate (HR), and galvanic skin response (GSR) signals

Introduction

Approximately 2.2 billion people have a vision impairment or suffer from blindness, caused primarily by uncorrected refractive errors, cataracts, age-related macular degeneration, and glaucoma. The purpose of the Sound of Vision project (SoV) [2] was to develop an assistive system for blind and visually impaired users that would facilitate navigation and obstacle detection. We present a study of cognitive load assessment and brain activation evaluation during an experiment in which eight visually impaired subjects performed various object detection and navigation activities while using both the white cane (a navigation aid they use on a daily basis) and the SoV device, which provided three types of sensory input—audio cues delivered through headphones, haptic cues delivered as vibrations applied on a vest placed on the user’s abdomen, and a combination of both audio and haptic information, called the multimodal sensory input.

