Abstract

The ability of humans to adapt to inter-modal discrepancies is an important factor in the design of virtual environments. In the present study, azimuthal localization cues were altered (to magnify interaural differences) relative to real proprioceptive, visual, and vestibular cues. Subjects were alternately tested and trained in hybrid real/virtual environments where auditory stimuli were synthesized (using a PC, Convolvotron, and electromagnetic head tracker) to be at 1 of 13 discrete positions marked by real lights. Testing consisted of identifying the azimuth of virtual sound sources without correct-answer feedback or significant head motion. Preliminary findings on resolution and bias for a variety of training procedures, as well as a number of different transformations of the localization cues, will be summarized. [Work supported by AFOSR, Grant No. 90-200.]
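As a minimal sketch of the kind of cue transformation described above: one way to "magnify interaural differences" is to remap each source azimuth to a larger angle before selecting the HRTF used for synthesis, so the rendered interaural time and level differences exceed the natural ones. The mapping, the gain value, and the 13-position spacing below are illustrative assumptions, not the authors' actual transformation or apparatus parameters.

```python
import numpy as np

def magnify_azimuth(theta_deg, gain=2.0):
    """Remap a source azimuth so interaural cues are exaggerated.

    Illustrative only: a sine-based mapping scales small angles by
    roughly `gain` while keeping the result within +/-90 degrees.
    """
    s = np.clip(gain * np.sin(np.radians(theta_deg)), -1.0, 1.0)
    return np.degrees(np.arcsin(s))

# 13 discrete source positions in the frontal horizontal plane
# (the spacing is assumed here; the abstract does not give it).
real_azimuths = np.linspace(-60, 60, 13)
virtual_azimuths = magnify_azimuth(real_azimuths)

# The HRTF used for rendering would then be chosen for the *remapped*
# azimuth, so the synthesized interaural differences are magnified
# relative to the real proprioceptive, visual, and vestibular cues.
for real, virt in zip(real_azimuths, virtual_azimuths):
    print(f"real {real:+6.1f} deg -> rendered {virt:+6.1f} deg")
```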
