Abstract

As we move through space, our retinae receive motion signals from two causes: motion in the world and self-motion. Mounting evidence has shown that vestibular self-motion signals interact profoundly with visual motion processing. However, most contemporary methods arguably lack portability and generality and cannot provide measurements during locomotion. Here we developed a virtual reality approach, combining a three-space sensor with a head-mounted display, to quantitatively manipulate the causality between retinal motion and head rotations in the yaw plane. Using this system, we explored how self-motion affects visual motion perception, particularly the motion aftereffect (MAE). Subjects watched gratings presented on a head-mounted display. The gratings drifted at the same speed as the head rotation, in a direction identical, opposite, or perpendicular to the direction of head rotation. We found that the MAE lasted a significantly shorter time when subjects' heads rotated than when their heads were kept still. This effect was present regardless of the gratings' drift direction and was also observed during passive head rotations. These findings suggest that adaptation to retinal motion is suppressed by head rotations. Because the suppression was also found during passive head movements, it must arise from visual-vestibular interaction rather than from efference copy signals. Such visual-vestibular interaction is more flexible than previously thought, since the suppression was observed even when the retinal motion direction was perpendicular to the head rotation. Our work suggests that a virtual reality approach can be applied to a variety of studies on multisensory integration and interaction.
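To make the visual-vestibular coupling concrete, the following Python sketch illustrates one way such a manipulation could be implemented: the grating's drift velocity is continuously yoked to the head's yaw velocity reported by the orientation sensor, with the drift direction set to be identical, opposite, or perpendicular to the head rotation. The function names (`read_yaw_deg`, `set_grating_velocity`), the condition mapping, and the control loop are assumptions for illustration only, not the authors' actual implementation or any specific sensor SDK.

```python
import time

# Hypothetical sketch (not the authors' code): yoke the drift velocity of a
# grating rendered on a head-mounted display to the head's yaw velocity, as
# reported by an orientation ("three-space") sensor.

# Mapping from experimental condition to a 2-D drift direction relative to
# the horizontal head rotation.
CONDITIONS = {
    "identical":     (1.0, 0.0),   # drift with the head rotation
    "opposite":      (-1.0, 0.0),  # drift against the head rotation
    "perpendicular": (0.0, 1.0),   # drift orthogonal (vertical) to it
}

def read_yaw_deg() -> float:
    """Placeholder: return the current head yaw angle in degrees."""
    raise NotImplementedError

def set_grating_velocity(vx_deg_s: float, vy_deg_s: float) -> None:
    """Placeholder: update the grating's drift velocity (deg/s) on the HMD."""
    raise NotImplementedError

def run_adaptation(condition: str, duration_s: float = 30.0) -> None:
    """Drive the grating so its drift speed matches the head's yaw speed."""
    dx, dy = CONDITIONS[condition]
    prev_yaw, prev_t = read_yaw_deg(), time.monotonic()
    end_t = prev_t + duration_s
    while time.monotonic() < end_t:
        yaw, t = read_yaw_deg(), time.monotonic()
        dt = t - prev_t
        if dt > 0:
            yaw_velocity = (yaw - prev_yaw) / dt  # signed, deg/s
            # Speed matches the head; direction follows the condition.
            set_grating_velocity(dx * yaw_velocity, dy * abs(yaw_velocity))
        prev_yaw, prev_t = yaw, t
```

In this sketch the grating is stationary whenever the head is stationary, so the identical, opposite, and perpendicular conditions differ only in how retinal motion relates to the concurrent head rotation, which is the causal relationship the study manipulates.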
