Abstract

Continuous attractor neural networks (CANNs) have been widely used as a canonical model for neural information representation. It remains unclear, however, how the neural system acquires such a network structure in practice. In the present study, we propose a biologically plausible scheme for the neural system to learn a CANN from real images. The scheme addresses two key issues. One is to generate high-level representations of objects, such that the correlation between neural representations reflects the semantic relationship between objects. We adopt a deep neural network trained on a large number of natural images to achieve this goal. The other is to learn correlated memory patterns in a recurrent neural network. We adopt a modified Hebb rule, which encodes the correlation between neural representations into the connectivity of the network. We carry out a number of experiments to demonstrate that when the presented images are linked by a continuous feature, the neural system learns a CANN successfully, in the sense that these images are stored as a continuous family of stationary states of the network, forming a sub-manifold of low energy in the network state space.
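As an illustration of the kind of correlation-aware Hebbian learning the abstract refers to, the following is a minimal sketch, not the paper's actual rule: it stores a chain of correlated binary patterns (standing in for representations of images linked by a continuous feature) in a recurrent network using the projection (pseudo-inverse) variant of the Hebb rule, which whitens by the pattern-overlap matrix so that correlated patterns remain stationary states. All sizes and the 5% neighbor-flip rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 5  # neurons, patterns (illustrative sizes)

# Correlated +/-1 patterns: each pattern differs from its neighbor by a
# small fraction of flipped units, mimicking images linked by a
# continuously varying feature.
patterns = np.empty((P, N))
patterns[0] = rng.choice([-1.0, 1.0], size=N)
for mu in range(1, P):
    flip = rng.random(N) < 0.05  # 5% of units change between neighbors
    patterns[mu] = patterns[mu - 1] * np.where(flip, -1.0, 1.0)

# Modified Hebb rule (projection form): the plain Hebb outer-product rule
# fails for correlated patterns, so the pattern-overlap matrix C is
# inverted to decorrelate the stored states.
C = patterns @ patterns.T / N                      # P x P overlap matrix
W = patterns.T @ np.linalg.inv(C) @ patterns / N   # recurrent weights
np.fill_diagonal(W, 0.0)                           # no self-connections

# Each stored pattern should be (nearly) a fixed point of sign dynamics.
for mu in range(P):
    overlap = np.sign(W @ patterns[mu]) @ patterns[mu] / N
    print(f"pattern {mu}: overlap after one update = {overlap:.2f}")
```

Under this rule the stored patterns sit at neighboring minima of the network's energy function; because adjacent patterns overlap strongly, those minima are connected by low-energy paths, which is the sub-manifold structure characteristic of a CANN.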
