In neural integrator networks, transient inputs are accumulated into sustained output signals that reflect the mathematical integral of those inputs over time. This computation has been identified as an important component of a wide variety of brain functions, ranging from the accumulation of sensory evidence for decision making to the motor control of eye movements. All current network models of neural integration assume that the conversion of transient inputs to sustained responses is accomplished by feedback among recurrently connected neuronal elements. Here we show that neural integration can occur even in feedforward networks, and we describe the properties of this novel class of integrators. We consider a feedforward network consisting of multiple stages, each of which linearly filters its inputs with a time constant τ. We show that the effective dynamics of this network can be reduced to those of a simple network consisting of a linear chain of neurons, with input entering at one end and being filtered in turn by each stage of the network. As a result of this filtering, later stages of the network have prolonged responses that peak at successively later times. Thus, the network effectively forms a delay-line set of basis functions that are localized in time and can be flexibly summed to generate a variety of temporal responses. We show analytically that, with appropriate choices of synaptic weights, the network can perform a nearly perfect integral of its inputs over a duration of order Nτ, where N is the number of stages in the network. We further show that, although the performance of the network is best understood in terms of basis functions corresponding to a delay line, the responses of the actual neurons in the network will generally be linear combinations of these basis functions and may not be easily recognized as originating from delay-line dynamics. The robustness of the network to uniform changes in all synaptic weights can be shown analytically to be similar to that of linear recurrent networks: the integrated activity decays exponentially if the weights are too small and grows exponentially, until signals begin to exit the network, if the weights are too large. We show that proper tuning of the weights can be accomplished by a homeostatic learning rule in which neurons scale their intrinsic gain and/or synaptic weights until their activity reaches a target level when averaged over time. In conclusion, this work suggests a novel mechanism for neural integration. Although we focus on its role as an integrator, the network bears strong similarities to networks previously proposed for temporal sequence recognition and production, suggesting that common underlying principles may be relevant to a host of temporal processing computations.
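The mechanism can be illustrated with a minimal numerical sketch (not from the original work; the stage count, time constant, weight, and input pulse below are assumed values chosen only for illustration). For a cascade of identical first-order low-pass filters, the impulse response of stage k is h_k(t) = t^(k−1) e^(−t/τ) / ((k−1)! τ^k), a standard result; these responses peak at successively later times, and their equal-weight sum stays approximately constant (≈ 1/τ) out to times of order Nτ, so a weighted sum over stages approximates the running integral of a brief input.

```python
import numpy as np

# Minimal sketch of the feedforward integrator idea (illustrative only;
# N, tau, w, and the input pulse are assumed values, not taken from the paper).
N = 50        # number of filtering stages
tau = 100.0   # time constant of each stage, in time steps
dt = 1.0      # Euler integration step
T = 10000     # simulation length in steps
w = 1.0       # uniform feedforward weight (w < 1 -> decay, w > 1 -> growth)

x = np.zeros(N)        # activity of each stage in the chain
readout = np.zeros(T)  # summed output, intended to track the input's integral

pulse = np.zeros(T)
pulse[100:110] = 1.0   # brief transient input delivered to the first stage

for t in range(T):
    # Each stage low-pass filters the preceding one:
    #   tau * dx_k/dt = -x_k + w * x_{k-1}, with x_1 driven by the input pulse.
    drive = np.concatenate(([pulse[t]], w * x[:-1]))
    x = x + (dt / tau) * (-x + drive)
    # An equal-weight sum over stages approximates the integral of the input
    # (scaled by 1/tau) for a duration of roughly N * tau.
    readout[t] = x.sum()

# With w = 1 the readout plateaus near pulse_area / tau = 10 / 100 = 0.1
# and holds that value for roughly N * tau = 5000 steps, after which the
# activity "exits" the end of the chain and the readout decays.
```

Setting w slightly below or above 1 in this sketch reproduces the exponential decay or growth of the integrated activity described above, and a homeostatic rule that rescales w until the time-averaged readout matches a target level would, under these assumptions, drive w back toward 1.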