Abstract

Humans continuously receive and integrate information from several sensory modalities. However, attentional resources limit the amount of information that can be processed. It is not yet clear how attentional resources and multisensory processing are interrelated. Specifically, the following questions arise: (1) Are there distinct spatial attentional resources for each sensory modality? and (2) Does attentional load affect multisensory integration? We investigated these questions using a dual task paradigm: participants performed two spatial tasks (a multiple object tracking task and a localization task), either separately (single task condition) or simultaneously (dual task condition). In the multiple object tracking task, participants visually tracked a small subset of several randomly moving objects. In the localization task, participants received either visual, auditory, or redundant visual and auditory location cues. In the dual task condition, we found a substantial decrease in participants' performance relative to the single task condition. Importantly, participants performed equally well in the dual task condition regardless of the location cues' modality. This result suggests that receiving spatial information from different modalities does not facilitate performance, thereby indicating shared spatial attentional resources for the auditory and visual modalities. Furthermore, we found that participants integrated redundant multisensory information similarly even when they experienced additional attentional load in the dual task condition. Overall, the findings suggest that (1) visual and auditory spatial attentional resources are shared and that (2) audiovisual integration of spatial information occurs at a pre-attentive processing stage.

Highlights

  • From all our senses, we continuously receive far more information than can be effectively processed

  • Do audition and vision share spatial attentional resources? Figure 3A shows a descriptive overview of performance on the multiple object tracking (MOT) task and the localization (LOC) task, respectively

  • The amount of interference in each condition is roughly equal in percentage terms, indicating a single pool of attentional resources rather than separate attentional resources for each sensory modality


Introduction

We continuously receive far more information than can be effectively processed. It remains unclear whether attention draws from separate pools of attentional resources for each sensory modality and to what extent attentional resources interact with multisensory integration processes. The question of whether attentional limitations are specific to each sensory modality or whether there is a common pool of attentional resources for all sensory modalities is a matter of ongoing debate (for support for distinct attentional resources see: Duncan et al., 1997; Potter et al., 1998; Soto-Faraco and Spence, 2002; Alais et al., 2006; Hein et al., 2006; Talsma et al., 2006; van der Burg et al., 2007; for support for a common pool of resources see: Jolicoeur, 1999; Arnell and Larson, 2002; Soto-Faraco et al., 2002; Arnell and Jenkins, 2004). If humans had separate attentional resources for each sensory modality, the total amount of information that could be attended to would be larger if the received information were distributed across several sensory modalities rather than received via only one sensory modality.

