Abstract

In contrast to Hopfield-like networks, random recurrent neural networks (RRNNs), whose couplings are drawn at random, exhibit complex dynamics (limit cycles, chaos). Information can be stored in these networks through Hebbian learning; eventually, learning "destroys" the complex dynamics and leads to a fixed-point attractor. Here we investigate the structural changes that learning induces in the network. We show that a simple Hebbian learning rule redistributes the synaptic weights from an initially homogeneous, random distribution to a heterogeneous one in which strong synaptic weights preferentially assemble into triangles. Hence learning organizes the network of large synaptic weights into a "small-world" structure.
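
The sketch below is not the authors' model, only a minimal illustration of the kind of process the abstract describes: a random recurrent rate network updated by a simple Hebbian rule, followed by a crude check of whether the strongest weights cluster into triangles. The network size N, gain g, learning rate alpha, the tanh transfer function, and the 95th-percentile weight threshold are all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100        # number of neurons (illustrative size)
g = 5.0        # coupling gain (assumed value)
alpha = 0.01   # Hebbian learning rate (assumed value)

# Random recurrent network: i.i.d. Gaussian couplings, no self-connections
J = g * rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
np.fill_diagonal(J, 0.0)

x = rng.normal(size=N)  # initial activations

def step(x, J):
    """One discrete-time update of a rate network with a sigmoidal transfer."""
    return np.tanh(J @ x)

# Alternate network dynamics with a simple Hebbian weight update
for epoch in range(200):
    x = step(x, J)
    # Hebbian rule: strengthen couplings between co-active units
    J += alpha * np.outer(x, x) / N
    np.fill_diagonal(J, 0.0)

# Illustrative check of the claim: do the strongest weights form triangles?
threshold = np.quantile(np.abs(J), 0.95)   # keep the top 5% of weights
A = (np.abs(J) > threshold).astype(int)
A = np.maximum(A, A.T)                     # symmetrize for triangle counting
triangles = np.trace(A @ A @ A) / 6
triples = (A.sum(axis=1) * (A.sum(axis=1) - 1)).sum() / 2
clustering = 3 * triangles / triples if triples else 0.0
print(f"clustering coefficient of the strong-weight graph: {clustering:.3f}")
```

A high clustering coefficient of the thresholded graph, relative to a random graph with the same density, would correspond to the "small-world" organization of large weights reported in the abstract.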
