Abstract

In this letter, we propose a novel neuro-inspired, low-resolution, online, unsupervised learning rule to train the reservoir, or liquid, of liquid state machines. The liquid is a large, sparsely interconnected recurrent network of spiking neurons. The proposed learning rule is inspired by structural plasticity and trains the liquid by forming and eliminating synaptic connections. Hence, learning involves rewiring of the reservoir connections, similar to the structural plasticity observed in biological neural networks. The network connections can be stored as a connection matrix and updated in memory using address event representation (AER) protocols, which are commonly employed in neuromorphic systems. On investigating the pairwise separation property, we find that trained liquids provide 1.36 ± 0.18 times more interclass separation while retaining similar intraclass separation compared to random liquids. Moreover, analysis of the linear separation property reveals that trained liquids are 2.05 ± 0.27 times better than random liquids. Furthermore, we show that our liquids retain the generalization ability and generality of random liquids. A memory analysis shows that trained liquids have 83.67 ± 5.79 ms longer fading memory than random liquids, which exhibit a fading memory of 92.8 ± 5.03 ms for a particular type of spike train input. We also shed light on the dynamics of the evolution of recurrent connections within the liquid. Finally, compared to "separation-driven synaptic modification," a recently proposed algorithm for iteratively refining reservoirs, our learning rule provides 9.30%, 15.21%, and 12.52% more liquid separation and 2.8%, 9.1%, and 7.9% better classification accuracy on 4-, 8-, and 12-class pattern recognition tasks, respectively.
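The abstract describes the mechanism only at a high level: a sparse connection matrix that learning rewires by forming and eliminating synapses, with the connections stored and updated as address events. The minimal Python sketch below illustrates that idea; it is not the paper's algorithm. The network size, connection probability, the `rewire` helper, and the correlation-style fitness signal are assumptions introduced here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100          # number of liquid (reservoir) neurons; illustrative size
P_CONNECT = 0.1  # sparse random initial connectivity, typical for liquids

# Binary connection matrix: W[post, pre] = 1 if `pre` projects to `post`.
W = (rng.random((N, N)) < P_CONNECT).astype(np.uint8)
np.fill_diagonal(W, 0)  # no self-connections


def rewire(W, fitness, n_swaps=1):
    """One structural-plasticity step: eliminate the existing synapse with
    the lowest fitness and form a new synapse at the unconnected pair with
    the highest fitness. `fitness` is a hypothetical N x N matrix of
    pre/post activity-correlation estimates; the paper's actual unsupervised
    criterion is not specified in the abstract."""
    connected = W.astype(bool)
    unconnected = ~connected
    np.fill_diagonal(unconnected, False)  # never form self-connections
    for _ in range(n_swaps):
        worst = np.argmin(np.where(connected, fitness, np.inf))    # prune
        best = np.argmax(np.where(unconnected, fitness, -np.inf))  # grow
        W.flat[worst], W.flat[best] = 0, 1
        connected.flat[worst] = False
        unconnected.flat[worst] = True
        connected.flat[best] = True
        unconnected.flat[best] = False
    return W


# Example: drive one rewiring step with a stand-in fitness signal.
fitness = rng.standard_normal((N, N))
W = rewire(W, fitness)

# The surviving synapses can be kept as an AER-style address list
# (one (post, pre) address pair per connection) for in-memory updates.
post_addr, pre_addr = np.nonzero(W)
```

Because the rule only toggles connections on and off rather than adjusting real-valued weights, each update reduces to rewriting a few address entries, which is why low-resolution, AER-based neuromorphic memory suffices to store and train the liquid.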
