Abstract
Two studies in this issue of Neuron challenge widely held assumptions about the role of positive feedback in recurrent neuronal networks. Goldman shows that such feedback is not necessary for memory maintenance in a neural integrator, and Murphy and Miller show that it is not necessary for amplification of orientation patterns in V1. Both suggest that seemingly recurrent networks can be feedforward in disguise.
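The "feedforward in disguise" idea can be illustrated with a minimal sketch (not code from either study, and purely hypothetical numbers): a connectivity matrix whose neurons appear mutually coupled, yet whose eigenvalues are all zero, so there is no positive feedback. Its Schur decomposition is upper triangular, meaning that in a rotated basis the network is strictly feedforward, and it can still transiently amplify input patterns.

```python
import numpy as np
from scipy.linalg import schur

# Illustrative two-neuron "recurrent" network; the connections look
# mutually recurrent, but both eigenvalues are 0: no positive feedback.
W = np.array([[ 1.0,  2.0],
              [-0.5, -1.0]])
print("eigenvalues:", np.linalg.eigvals(W))      # both ~0

# Schur decomposition W = Q T Q^T.  T is upper triangular, so in the
# basis given by Q the network is purely feedforward ("in disguise").
T, Q = schur(W)
print("Schur factor T:\n", np.round(T, 3))

# Amplification without feedback: driving the input-sensitive pattern
# yields a larger response along the output pattern one step later.
x = np.array([1.0, 2.0])                         # input pattern, norm ~2.24
print("response norm:", np.linalg.norm(W @ x))   # ~5.59, a 2.5x gain
```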