Retinotopic remapping through gain-modulated neural fields

Sebastian Schneegans1* and Gregor Schöner1

1 Ruhr-Universität Bochum, Germany

During visual exploration, humans continuously make saccades, rapid eye movements that shift the center of gaze to locations of interest, thereby completely altering the retinal image. Two explanations have been proposed to account for the perception of visual stability and for robust spatial memory despite intervening saccades. According to the first, a representation of the environment in an eye-centered reference frame is remapped whenever a saccade occurs. This hypothesis is supported by findings of neurons in macaque parietal and frontal cortex whose receptive fields appear to shift transiently in accordance with the metrics of an impending saccade, starting before the actual eye movement is executed [1]. The second hypothesis states that a more gaze-invariant representation (head- or body-centered) is constructed from the eye-centered input, facilitating trans-saccadic integration and memory. This reference frame transformation is hypothesized to be based on so-called gain-modulated neurons found in posterior parietal cortex, which have visual receptive fields in eye-centered coordinates, but whose overall response strength is modulated by current eye position [2]. We show how these two hypotheses can be combined in a single mechanism. As a framework we use dynamic neural fields, which provide a biologically plausible architecture to model neural activation patterns at the population level [3]. The key element of the mechanism is a high-dimensional transformation field, which receives eye position and retinotopic visual input and projects to a head-centered map. The units of this field show the same overall response pattern as the gain-modulated neurons, and the architecture is analogous to previous implementations of the second hypothesis [4].
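The core of this transformation can be sketched in a few lines of NumPy. The following is a minimal, purely illustrative 1-D sketch, not the model's actual field dynamics: all names, grid sizes, and Gaussian parameters are assumptions. Gain modulation is captured as an outer product of the retinal input with the eye-position signal, and the projection to the head-centered map sums transformation-field activity along the diagonals h = r + e.

```python
import numpy as np

def gaussian(x, mu, sigma=2.0):
    """Gaussian activation profile centered at mu (illustrative choice)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

retinal = np.arange(-20, 21)   # eye-centered (retinal) positions
eye = np.arange(-10, 11)       # possible gaze directions
head = np.arange(retinal[0] + eye[0], retinal[-1] + eye[-1] + 1)

r_input = gaussian(retinal, 5.0)   # visual stimulus at retinal position 5
e_input = gaussian(eye, -3.0)      # current eye position -3

# Each transformation-field unit has an eye-centered receptive field whose
# response strength is gain-modulated by eye position: an outer product.
trans_field = np.outer(e_input, r_input)   # shape (eye positions, retinal positions)

# Project to the head-centered map by summing along diagonals h = r + e.
h_map = np.zeros(head.size)
for i, e in enumerate(eye):
    for j, r in enumerate(retinal):
        h_map[np.searchsorted(head, r + e)] += trans_field[i, j]

print(head[np.argmax(h_map)])   # peak near 5 + (-3) = 2
```

The read-out peak sits at the head-centered position given by retinal position plus eye position, which is exactly the invariance the gain-modulated units are hypothesized to provide.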
Unlike previous approaches, we aim to capture the time course of neural activation patterns under changing inputs, taking into account feedforward and feedback connections between the different maps. This connectivity produces distributed representations of perceptual and memory items, stabilized by mutual excitation between the transformation field and the head-centered map. Crucially, a retinotopic remapping as claimed by the first hypothesis emerges directly from this architecture: Whenever the eye position changes, the combined inputs to the transformation field lead to a shift of activity with respect to the eye-centered reference frame. To make this remapping predictive, the representation of eye position itself is updated by a corollary discharge signal (which has been shown to be a prerequisite for the remapping [5]) using a second, analogous mechanism.
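The emergence of remapping from this connectivity can be illustrated by running the same transformation structure in the feedback direction. In the following sketch (again with assumed, illustrative names and parameters), a remembered head-centered location is combined with an updated eye-position signal, standing in for the corollary-discharge update, and the resulting eye-centered projection appears at the correspondingly shifted retinal location:

```python
import numpy as np

def gaussian(x, mu, sigma=2.0):
    """Gaussian activation profile centered at mu (illustrative choice)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

retinal = np.arange(-20, 21)
eye = np.arange(-10, 11)
head = np.arange(-30, 31)

h_mem = gaussian(head, 2.0)   # remembered item in the head-centered map
e_new = gaussian(eye, 4.0)    # eye position predicted by corollary discharge

# Transformation field driven by head-centered feedback and eye-position
# input: unit (e, r) receives head-centered input from position h = r + e.
trans = np.zeros((eye.size, retinal.size))
for i, e in enumerate(eye):
    for j, r in enumerate(retinal):
        trans[i, j] = e_new[i] * h_mem[np.searchsorted(head, r + e)]

# Reading out the eye-centered projection yields the remapped representation.
r_map = trans.sum(axis=0)
print(retinal[np.argmax(r_map)])   # remapped to retinal position 2 - 4 = -2
```

Because the eye-position input is updated before the eye actually moves, the eye-centered activity peak shifts in anticipation of the saccade, reproducing the predictive character of the remapping described in the first hypothesis.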