Abstract

The ability of blind humans to navigate complex environments through echolocation has received rapidly increasing scientific interest. However, technical limitations have precluded a formal quantification of the interplay between echolocation and self-motion. Here, we use a novel virtual echo-acoustic space technique to formally quantify the influence of self-motion on echo-acoustic orientation. We show that both the vestibular and proprioceptive components of self-motion contribute significantly to successful echo-acoustic orientation in humans: vestibular input induced by whole-body self-motion resolves orientation-dependent biases in echo-acoustic cues, while fast head motions, relative to the body, provide additional proprioceptive cues that allow subjects to assess echo-acoustic space referenced against their body orientation. These psychophysical findings demonstrate that human echolocation is well suited to drive precise locomotor adjustments. Our data shed new light on the sensory–motor interactions and the possible optimization strategies underlying echolocation in humans.
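
The core of such a virtual echo-acoustic space is the real-time rendering of echoes whose delay and level track the listener's simulated pose. The excerpt here does not specify the authors' implementation, so the Python sketch below is purely illustrative: the click synthesis, the single-reflector geometry and the 1/distance attenuation model are all assumptions.

```python
import numpy as np

FS = 44_100             # audio sample rate in Hz (assumed)
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def make_click(duration_s=0.003, f0=3000.0):
    """Synthesize a short, tongue-click-like transient (assumed stimulus)."""
    t = np.arange(int(duration_s * FS)) / FS
    return np.sin(2 * np.pi * f0 * t) * np.hanning(t.size)

def render_virtual_echo(emission, reflector_distance_m):
    """Mix the direct sound with one delayed, attenuated echo.

    The delay is the two-way travel time to the reflector; the gain
    uses a simplistic 1/distance spreading law chosen for illustration.
    """
    delay = int(round(2 * reflector_distance_m / SPEED_OF_SOUND * FS))
    gain = 1.0 / max(reflector_distance_m, 1e-3)
    out = np.zeros(len(emission) + delay)
    out[:len(emission)] += emission                      # direct path
    out[delay:delay + len(emission)] += gain * emission  # echo
    return out

# A reflector 1.7 m ahead returns its echo roughly 10 ms after the click.
rendered = render_virtual_echo(make_click(), reflector_distance_m=1.7)
```

In a closed-loop experiment, `reflector_distance_m` (and, with binaural rendering, the echo's direction) would be updated continuously from the tracked head and body pose; this coupling is what lets self-motion interact with echo-acoustic cues.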

Highlights

  • Some animals, like bats and toothed whales, are known to use echolocation for orientation and navigation purposes

  • Subjects did not use the full range of speeds available in the control experiment but employed roughly the same average and top speeds as in Experiment 2.1

  • There were no significant differences in performance between Experiment 2.1 and the control experiment

Introduction

Some animals, like bats and toothed whales, are known to use echolocation for orientation and navigation. They actively emit precisely timed acoustic signals and analyse the resulting echoes to extract spatial information about their environments. This allows them to compensate for the lack of visual stimuli in the nocturnal darkness or murky waters of their habitats [1,2]. Some blind humans use echoes from self-generated sounds to represent their spatial environment with high precision (for reviews, see [3,4]).
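
The basic range computation behind this behaviour is the two-way travel time of sound: an echo returning after delay t implies a reflector at distance d = c·t/2. A minimal worked example (the function name is my own):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def distance_from_echo_delay(delay_s):
    """Reflector distance implied by an echo delay over the two-way path."""
    return SPEED_OF_SOUND * delay_s / 2.0

# An echo arriving 10 ms after the emission puts the reflector ~1.7 m away.
print(distance_from_echo_delay(0.010))  # -> 1.715
```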
