Abstract

Multisensory integration is the perceptual process by which the user of a Head-Mounted Display (HMD) combines vision from the HMD with concurrent auditory signals into a single object. Because HMD users are usually mobile, visual and auditory information may not always be spatially congruent, yet congruence is a requirement for multisensory integration to occur. Previous research has shown that multisensory integration is less effective when the user is walking and sound is delivered via a speaker in a fixed location. In Experiment 1, we showed that people integrate information less effectively when they hear sound from a speaker while walking rather than sitting because they experience a combination of sound motion and background motion, not because of any workload associated with walking. In Experiment 2, in which participants' multisensory integration performance did not rely on working memory, performance was worse when participants walked rather than sat while hearing sound through an earpiece rather than in free field. These mixed results highlight the difficulty of replicating multisensory integration research in applied contexts.
