Abstract
Multimodal perception is a key factor in obtaining a rich and meaningful representation of the world. However, how the individual stimuli combine to determine the overall percept remains an open research question. The present work investigates the effect of sound on the bimodal perception of motion. A visual moving target was presented to participants, paired with a concurrent sound, in a time reproduction task. Particular attention was paid to the structure of both the auditory and the visual stimuli. Four laws of motion were tested for the visual stimulus, one of which was biological. Nine sound profiles were tested, ranging from a simple constant pitch to more variable and complex pitch profiles, always presented synchronously with the motion. Participants' responses show that constant sounds produce the worst duration estimation performance, even worse than the silent condition, whereas more complex sounds yield significantly better performance. The structure of the visual stimulus and that of the auditory stimulus appear to affect performance independently. Biological motion provides the best performance, while motion with a constant-velocity profile provides the worst. The results clearly show that a concurrent sound influences the unified perception of motion, and that the type and magnitude of the bias depend on the structure of the sound stimulus. Contrary to expectations, the best performance is produced not by the simplest stimuli but by more complex stimuli that are richer in information.
Highlights
Multimodal perception is a crucial part of everyday life
This is notably the case when we look at an object that produces sound while moving through the environment, such as a car during a race, where its sound is likely to contribute to the tracking of the visual target
It is a common experience that coherence, or incoherence, among incoming stimuli can, under specific conditions, give rise to an overall percept that does not correspond to physical reality [for instance, in the ventriloquism effect (Slutsky and Recanzone, 2001; Morein-Zamir et al., 2003), in spatial or temporal illusions (Mateeff et al., 1985; Kitajima and Yamashita, 1999; Shams et al., 2002; Watkins et al., 2006), or in the alteration of other attributes (Kitagawa and Ichihara, 2002; Shams et al., 2002, 2005)]
Summary
Multimodal perception is a crucial part of everyday life. This is notably the case when we look at an object that produces sound while moving through the environment, such as a car during a race, where its sound is likely to contribute to the tracking of the visual target. Likewise, it seems easier to estimate how long a solo lasted, say that of a saxophone player, if we were both listening to the sounds and watching his or her movements. Both cases are examples of multimodal perception, in which visual and acoustic information arrive simultaneously. To date, how these pieces of information combine remains a matter of debate.