Abstract

I investigate essential neuronal mechanisms of visual attention based on object-based theory and a biased-competition scheme. A neural network model is proposed that consists of two feature networks, FI and FII, and one object network, OJ. The FI and FII networks send feedforward projections to the OJ network and receive feedback projections from the OJ network in a convergent/divergent manner. The OJ network integrates information about sensory features originating from the FI and FII networks into information about objects. I let the feature networks and the object network memorize individual features and objects according to the Hebbian learning rule, creating point attractors corresponding to these features and objects as long-term memories in the network dynamics. When the model tries to attend to objects that are superimposed, the point attractors relevant to the two objects emerge in each network. After a short interval (hundreds of milliseconds), the point attractors relevant to one of the two objects are selected and the other point attractors are completely suppressed. I suggest that coherent interactions of dynamical attractors relevant to the selected object may be the neuronal substrate for object-based selective attention. Bottom-up (FI-to-OJ and FII-to-OJ) neuronal mechanisms separate candidate objects from the background, and top-down (OJ-to-FI and OJ-to-FII) mechanisms resolve object competition, by which one relevant object is selected from the candidate objects.
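The abstract does not give the model equations, but the core idea of storing objects as point attractors via Hebbian learning and letting the dynamics select one of two superimposed memories can be illustrated with a minimal Hopfield-style sketch. Everything here (network size, the two random "object" patterns, the update schedule) is a hypothetical stand-in, not the author's actual FI/FII/OJ architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64  # number of units in this toy network (assumed, not from the paper)

# Two random binary patterns play the role of two memorized "objects".
patterns = np.sign(rng.standard_normal((2, N)))

# Hebbian learning rule: sum of outer products of the stored patterns.
W = (np.outer(patterns[0], patterns[0]) + np.outer(patterns[1], patterns[1])) / N
np.fill_diagonal(W, 0)  # no self-connections

# Initial state: a noisy superposition of both objects, mimicking
# two superimposed objects competing for selection.
state = np.sign(patterns[0] + patterns[1] + 0.1 * rng.standard_normal(N))
state[state == 0] = 1  # resolve any exact ties

# Asynchronous updates: the dynamics settles into one point attractor,
# suppressing the other memory (the "winner" of the competition).
for _ in range(10):
    for i in rng.permutation(N):
        state[i] = 1.0 if W[i] @ state >= 0 else -1.0

# Overlap (normalized similarity) of the final state with each stored object.
overlaps = patterns @ state / N
```

After settling, one overlap is near 1 while the other stays near chance level, which is the single-network analogue of the selection-and-suppression behavior described above; the paper's biased-competition account additionally uses feedforward/feedback coupling between feature and object networks.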
