Abstract

Directing attention to a specific feature of an object has been linked to different forms of attentional modulation. Object-based attention theory is founded on the finding that even task-irrelevant features of the selected object are subject to attentional modulation, whereas feature-based attention theory proposes a global processing benefit for the selected feature, even at other objects. Most studies investigated either one form of attention or the other, leaving open the possibility that object- and feature-specific attentional effects occur at the same time and may just represent two sides of a single attention system. Here, we investigate this issue by testing attentional spreading within and across objects, using reaction time (RT) measurements for changes of attended and unattended features on both attended and unattended objects. We asked subjects to report color and speed changes occurring on one of two overlapping random dot patterns (RDPs) presented at the center of gaze. The key property of the stimulation was that only one of the features (e.g., motion direction) was unique to each object, whereas the other feature (e.g., color) was shared by both. The results of two experiments show that co-selection of unattended features occurs even when those features provide no means of selecting the object. At the same time, they demonstrate that this processing benefit is not restricted to the selected object but spreads to the task-irrelevant one. We conceptualize these findings in a 3-step model of attention that assumes a task-dependent top-down gain, object-specific feature selection based on task and binding characteristics, and a global feature-specific processing enhancement. The model unifies a large body of experimental results within a single framework and makes various experimentally testable predictions about the interaction of object- and feature-specific processes.
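
To make the proposed 3-step model concrete, the following is a minimal, purely illustrative Python sketch of how its stages could be combined. The gain values, the multiplicative combination rule, and the names ObjectFeatures and attentional_gains are assumptions made for illustration; they are not parameters or code from the study.

```python
# Hypothetical sketch of the 3-step attention model described above.
# All numeric gain values and the multiplicative combination rule are
# illustrative assumptions, not quantities reported by the study.
from dataclasses import dataclass

@dataclass
class ObjectFeatures:
    name: str
    color: str       # e.g. "red"
    direction: str   # e.g. "leftward"

def attentional_gains(objects, attended_object, cued_dimension,
                      top_down=1.5, co_selection=1.3, global_spread=1.2):
    """Return a gain factor for every (object, feature dimension) pair."""
    dims = ("color", "direction")
    gains = {(obj.name, d): 1.0 for obj in objects for d in dims}
    target = next(o for o in objects if o.name == attended_object)

    # Step 1: task-dependent top-down gain on the cued dimension of the
    # attended object.
    gains[(target.name, cued_dimension)] *= top_down

    # Step 2: object-specific co-selection of the uncued feature bound to
    # the attended object.
    uncued = "direction" if cued_dimension == "color" else "color"
    gains[(target.name, uncued)] *= co_selection

    # Step 3: global feature-specific enhancement: feature values enhanced
    # on the attended object are also enhanced on objects sharing them.
    for obj in objects:
        if obj.name == target.name:
            continue
        for d in dims:
            if getattr(obj, d) == getattr(target, d):
                gains[(obj.name, d)] *= global_spread
    return gains

# Two overlapping RDPs sharing color but moving in opposite directions.
rdp_a = ObjectFeatures("RDP-A", color="red", direction="leftward")
rdp_b = ObjectFeatures("RDP-B", color="red", direction="rightward")
print(attentional_gains([rdp_a, rdp_b], attended_object="RDP-A",
                        cued_dimension="direction"))
```

In this toy run, the shared color of the unattended RDP receives the global feature-specific enhancement while its unique direction does not, mirroring the pattern of spreading described above.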

Highlights

  • The term attention is widely used to paraphrase specific modulations in the representation of task-relevant sensory information

  • Considering the relevant behavioral measure of this study, we found very similar reaction times (RTs) across blocks for both correctly cued speed and color changes in both experiments, with no significant difference between blocks (Experiment 1: speed: F(8,63) = 0.24, p = 0.981, color: F(8,63) = 0.85, p = 0.563; Experiment 2: speed: F(8,63) = 1.1, p = 0.378, color: F(8,63) = 0.8, p = 0.609); a minimal sketch of this block-wise comparison follows this list

  • The significant dependencies between the information provided by the cue and the respective RT distributions were interpreted as representing feature- and object-based attentional selection, and were integrated into a 3-step model of attention acting on the early processing of visual stimuli
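
The block-wise RT comparison quoted above corresponds to a one-way ANOVA with nine blocks (between-blocks df = 8) and 63 residual degrees of freedom. The sketch below shows such a comparison using simulated placeholder data; the RT values and the group sizes are assumptions for illustration, not the study's measurements.

```python
# Illustrative one-way ANOVA over RT blocks; the data are simulated
# placeholders, not measurements from the study.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Nine blocks with eight RT observations each, matching F(8,63).
blocks = [rng.normal(loc=450, scale=40, size=8) for _ in range(9)]

# One-way ANOVA: does mean RT differ between blocks?
f_stat, p_value = f_oneway(*blocks)
print(f"F(8,63) = {f_stat:.2f}, p = {p_value:.3f}")
```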

Introduction

The term attention is widely used to paraphrase specific modulations in the representation of task-relevant sensory information. While the term suggests a homogeneous process, attention research has revealed many different aspects of attentional modulation, both in terms of neuronal mechanisms and behavior, and not all of these results have turned out to be compatible. Directing attention to the motion of a stimulus, in terms of direction and speed, locally increases the firing rate (Treue and Maunsell, 1996) and the gamma power of the local field potential (Khayat et al., 2010) of neurons in the motion-sensitive middle temporal (MT) area, causes shrinkage of receptive fields around the attended stimulus (Womelsdorf et al., 2006a), and increases stimulus selectivity of single neurons (Wegener et al., 2004). Corresponding findings have been obtained in other visual areas for features like color and form (McAdams and Maunsell, 1999; Reynolds et al., 1999; Fries et al., 2001; Taylor et al., 2005; Sundberg et al., 2012).
