Abstract

The deployment of spatial attention is highly sensitive to stimulus predictability. Despite evidence for strong crossmodal links in spatial attentional systems, it remains unclear how concurrent but divergent predictions for targets in different sensory modalities are integrated. In a series of behavioral studies, we investigated the processing of modality-specific expectancies using a multimodal cueing paradigm in which auditory cues predicted the location of visual or tactile targets, with the cue predictability for visual and tactile targets manipulated independently. A Bayesian ideal observer model with a weighting factor was applied to trial-wise individual response speed to investigate how the two probabilistic contexts are integrated. Results showed that the degree of integration depended on the level of predictability and on the divergence of the modality-specific probabilistic contexts (Experiments 1–2). However, when the two probabilistic contexts were matched in their level of predictability and were highly divergent (Experiment 3), more separate processing was favored, especially when visual targets were processed. These findings suggest that modality-specific predictions are flexibly integrated according to their reliability, supporting the hypothesis of separate modality-specific attentional systems that are nevertheless linked to guarantee an efficient deployment of spatial attention across the senses.
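
For intuition, the following is a minimal sketch of how such a weighted Bayesian ideal observer could be implemented. The Beta-Bernoulli update rule, the specific weighting scheme, and all names here are illustrative assumptions rather than the exact model used in the study:

```python
import numpy as np

def weighted_ideal_observer(valid, modality, target_mod, w, a0=1.0, b0=1.0):
    """Trial-wise Beta-Bernoulli estimate of cue validity for one target
    modality, with evidence from the other modality down-weighted by w.

    valid      : sequence of 1 (validly cued trial) / 0 (invalidly cued trial)
    modality   : sequence of trial labels, e.g. 'vis' or 'tac'
    target_mod : modality whose validity estimate is tracked
    w          : weighting factor in [0, 1]; w = 1 fully integrates the two
                 probabilistic contexts, w = 0 keeps them fully separate
    """
    a, b = a0, b0                          # Beta prior pseudo-counts
    predictions = []
    for v, m in zip(valid, modality):
        predictions.append(a / (a + b))    # predicted validity before the trial
        g = 1.0 if m == target_mod else w  # weight other-modality evidence
        a += g * v                         # update with (weighted) valid count
        b += g * (1 - v)                   # update with (weighted) invalid count
    return np.array(predictions)
```

Trial-wise predictions from such a model could then be regressed against individual response speed, with w fitted per participant to quantify the degree of integration between the two probabilistic contexts.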

Highlights

  • Deploying attention in space is a flexible process that enables us to react efficiently to environmental stimuli

  • We found no differences between experiments in fixation time for either visual (t(15) = 0.94, p = 0.36) or tactile targets (t(15) = 0.97, p = 0.35)

  • We investigated the deployment of spatial attention in response to visual and tactile stimuli throughout multiple behavioral experiments


Introduction

Deploying attention in space is a flexible process that enables us to react efficiently to environmental stimuli. In a series of behavioral studies, Spence et al. [8] showed that expectations about the location of a target in one modality (either visual or tactile) modulated expectations in the other modality, arguing for the existence of crossmodal links between vision and touch. Their data suggest that there is some degree of integration, or merging, of concurrent but divergent modality-specific cue predictabilities. They argue against an unconditionally supramodal nature of these effects, but it remains unclear which factors influence the degree of integration. In this series of behavioral experiments, we used computational modeling of behavior to investigate whether, and under which circumstances, crossmodal links during the deployment of spatial attention occur. We hypothesized that when the two modality-specific probabilistic contexts diverged strongly, participants would process them more separately.
