Abstract

Humans constantly process and integrate sensory input from multiple sensory modalities. However, the amount of input that can be processed is constrained by limited attentional resources. A matter of ongoing debate is whether attentional resources are shared across sensory modalities, and whether multisensory integration depends on attentional resources. Previous research suggests that the distribution of attentional resources across sensory modalities depends on the type of task. Here, we tested a novel task combination in a dual-task paradigm: participants performed a self-terminated visual search task and a localization task either in separate sensory modalities (i.e., haptics and vision) or both within the visual modality. The two tasks interfered considerably. However, participants performed the visual search task faster when the localization task was performed in the tactile modality than when both tasks were performed within the visual modality. This finding indicates that tasks performed in separate sensory modalities draw in part on distinct attentional resources. Nevertheless, participants integrated visuotactile information optimally in the localization task even when attentional resources were diverted to the visual search task. Overall, our findings suggest that visual search and tactile localization rely partly on distinct attentional resources, and that optimal visuotactile integration does not depend on attentional resources.
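For reference, "optimal" here denotes the standard maximum likelihood estimation (MLE) benchmark for multisensory integration (cf. Ernst and Banks, 2002); the formulation below is this conventional model, not a reproduction of the paper's own methods. Under MLE, the bimodal location estimate is a reliability-weighted average of the unimodal estimates, and its variance falls below that of either modality alone:

\hat{S}_{VT} = w_V \hat{S}_V + w_T \hat{S}_T, \quad w_V = \frac{\sigma_T^2}{\sigma_V^2 + \sigma_T^2}, \quad w_T = 1 - w_V

\sigma_{VT}^2 = \frac{\sigma_V^2 \, \sigma_T^2}{\sigma_V^2 + \sigma_T^2} \le \min(\sigma_V^2, \sigma_T^2)

Integration counts as optimal when the empirically measured bimodal variance matches this prediction.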

Highlights

  • In daily life, humans face tasks that are effortful and resource demanding, such as looking for a person in a crowd or focusing on the sound of a person’s voice in a noisy environment

  • The goal of the present study is to investigate whether the attentional resources required by a visual search task and a tactile localization task are shared or distinct

  • We found that in all conditions participants deviated their gaze by less than 1° of visual angle from the center of the screen [visual search alone: M = 0.65°, t(11) = −11.71, corrected p < 0.001; visual search in combination with visual localization: M = 0.57°, t(11) = −8.76, corrected p < 0.001; visual search in combination with tactile localization: M = 0.65°, t(11) = −7.13, corrected p < 0.001; visual search in combination with visuotactile localization: M = 0.62°, t(11) = −7.07, corrected p < 0.001], indicating that participants maintained central fixation while performing the visual search task (a minimal sketch of this test follows the list)

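As a minimal illustration of the fixation check reported above, the following Python sketch shows how one-sample t-tests against the 1° criterion could be computed. The data here are hypothetical (simulated to match the reported means), and the paper's actual pipeline may differ, e.g., in its multiple-comparison correction, for which Bonferroni is assumed below.

# Minimal sketch of the gaze-deviation check (hypothetical data; the
# paper's actual analysis pipeline may differ). For each condition we
# test whether mean gaze deviation from screen center lies below
# 1 degree of visual angle.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-participant mean gaze deviations (degrees), N = 12
conditions = {
    "search alone": rng.normal(0.65, 0.10, 12),
    "search + visual localization": rng.normal(0.57, 0.12, 12),
    "search + tactile localization": rng.normal(0.65, 0.14, 12),
    "search + visuotactile localization": rng.normal(0.62, 0.15, 12),
}

n_tests = len(conditions)
for name, dev in conditions.items():
    # One-tailed one-sample t-test against the 1-degree criterion
    t, p = stats.ttest_1samp(dev, popmean=1.0, alternative="less")
    # Bonferroni correction across the four conditions (one plausible choice)
    p_corrected = min(p * n_tests, 1.0)
    print(f"{name}: M = {dev.mean():.2f} deg, t({len(dev) - 1}) = {t:.2f}, "
          f"corrected p = {p_corrected:.4f}")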

Introduction

Humans face tasks that are effortful and resource demanding, such as looking for a person in a crowd or focusing on the sound of a person’s voice in a noisy environment. In such tasks, humans constantly process and integrate sensory input from multiple sensory modalities. When a spatial task is performed in the visual modality and a discrimination task in the auditory modality, evidence for distinct attentional resources for the visual and auditory modalities has been found (Arrighi et al., 2011). When a spatial task is performed in the visual modality and another spatial task in the auditory modality, shared attentional resources for the visual and auditory modalities have been found (Wahn and König, 2015a). What is more, when a spatial task and a discrimination task are performed in separate sensory modalities, attentional resources are likewise drawn from distinct pools.
