Abstract

Predicting motion is essential for many everyday activities, e.g., in road traffic. Previous studies on motion prediction have failed to find consistent results, which might be due to the use of very different stimulus material and behavioural tasks. Here, we directly tested the influence of task (detection, extrapolation) and stimulus features (visual vs. audiovisual and three-dimensional vs. non-three-dimensional) on temporal motion prediction in two psychophysical experiments. In both experiments a ball followed a trajectory toward the observer and temporarily disappeared behind an occluder. In audiovisual conditions a moving white-noise sound (congruent or incongruent with the visual motion direction) was presented concurrently. In experiment 1 the ball reappeared on a predictable or a non-predictable trajectory and participants detected when the ball reappeared. In experiment 2 the ball did not reappear after occlusion and participants judged when it would reach a specified position at one of two possible distances from the occluder (extrapolation task). Both experiments were conducted in three-dimensional space (using a stereoscopic screen and polarised glasses) and also without stereoscopic presentation. Participants benefitted from visually predictable trajectories and from concurrent sounds during detection. Additionally, visual facilitation was more pronounced for non-3D stimulation in the detection task. In contrast, in the more complex extrapolation task, group-mean results indicated that auditory information impaired motion prediction. However, a post hoc cross-validation procedure (split-half) revealed that participants varied in their ability to use sounds during motion extrapolation: most participants profited from sounds at either the near or the far extrapolation distance but were impaired at the other. We propose that interindividual differences in extrapolation efficiency might be the mechanism governing this effect. Together, our results indicate that both a realistic experimental environment and subject-specific differences modulate audiovisual motion prediction and need to be considered in future research.
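
To make the split-half logic concrete, the following is a minimal, hypothetical Python sketch of such a consistency check; the function names, trial counts, and simulated reaction times are our assumptions for illustration, not the study's actual analysis code. It estimates a participant's sound benefit on one random half of the trials and tests whether its sign replicates on the other half:

    # Hypothetical split-half consistency check for a per-participant sound
    # benefit; illustrative only, not the study's analysis code.
    import numpy as np

    rng = np.random.default_rng(0)

    def sound_benefit(rt_visual, rt_audiovisual):
        # Mean reaction-time advantage of audiovisual over visual-only trials
        # (positive values = faster with sound).
        return rt_visual.mean() - rt_audiovisual.mean()

    def split_half_consistency(rt_visual, rt_audiovisual, n_splits=1000):
        # Randomly split the trials in half, estimate the benefit on each half,
        # and return the proportion of splits in which the sign agrees.
        agree = 0
        for _ in range(n_splits):
            v = rng.permutation(rt_visual)
            av = rng.permutation(rt_audiovisual)
            b1 = sound_benefit(v[: v.size // 2], av[: av.size // 2])
            b2 = sound_benefit(v[v.size // 2 :], av[av.size // 2 :])
            agree += (b1 > 0) == (b2 > 0)
        return agree / n_splits

    # Toy usage with simulated reaction times (seconds) for one participant:
    rt_v = rng.normal(0.55, 0.08, size=120)   # visual-only trials
    rt_av = rng.normal(0.52, 0.08, size=120)  # audiovisual trials
    print(split_half_consistency(rt_v, rt_av))

A sign that replicates across random halves for a given participant would indicate a stable individual benefit (or cost) of sound rather than trial noise.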

Highlights

  • Motion prediction is a critical ability for many species, e.g., when catching prey or avoiding being caught by a predator

  • Previous studies on motion prediction used diverse tasks and focused predominantly on the visual modality (e.g., DeLucia, 2004; Lugtigheid and Welchman, 2011; Landwehr et al., 2013)

  • Several previous studies have reported significant influences of individual differences on various multisensory phenomena, including the point of subjective simultaneity (Eg and Behne, 2015), temporal order judgement (Grabot and van Wassenhove, 2017), intersensory facilitation (Hagmann and Russo, 2016), and the McGurk effect (Mallick et al., 2015; Ipser et al., 2017). Our findings extend these observations and demonstrate the influence of individual differences on audiovisual motion prediction

Introduction

Motion prediction is a critical ability for many species, e.g., when catching prey or avoiding being caught by a predator. Research has recently started to focus on multisensory interactions in motion perception, though often with simple stimuli (Hofbauer et al., 2004; Prime and Harris, 2010). These studies reported faster reactions when motion is presented in both modalities (Harrison et al., 2010) and enhanced perceptual sensitivity for bimodal compared with unimodal motion signals (Wuerger et al., 2003). A salient motion signal in one modality can also bias the perception of stationary or ambiguously moving stimuli in another modality (Hidaka et al., 2009; Teramoto et al., 2010; Alink et al., 2012).

