Abstract

The liquid state machine (LSM) is a type of recurrent spiking neural network with a strong relationship to neurophysiology, and it has achieved great success in time-series processing. However, the computational cost of simulation and the complex, time-dependent dynamics limit the size and functionality of LSMs. This paper presents a large-scale bio-inspired LSM with modular topology. We integrate findings on the visual cortex showing that specifically designed input synapses can fit the activation of the real cortex and, at no additional cost, perform the Hough transform, a feature-extraction algorithm used in digital image processing. We experimentally verify that such a combination can significantly improve network functionality. The network performance is evaluated on the MNIST dataset, where the image data are encoded into spike trains by Poisson coding. We show that the proposed structure not only significantly reduces computational complexity but also achieves higher performance than previously reported networks of a similar size. We also show that the proposed structure is more robust against system damage than small-world and random structures. We believe that the proposed computationally efficient method can greatly contribute to future applications of reservoir computing.
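To make the input encoding concrete, the following is a minimal sketch of Poisson rate coding for MNIST-style images, in which each pixel drives an independent Poisson spike train with a rate proportional to its intensity. The parameters (simulation duration, time step, maximum firing rate) are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def poisson_encode(image, duration_ms=100, dt_ms=1.0, max_rate_hz=200.0, rng=None):
    """Encode a grayscale image (values in [0, 1]) into Poisson spike trains.

    Each pixel is an independent Poisson process whose firing rate is
    proportional to the pixel intensity. Returns a (timesteps, pixels)
    boolean array of spikes.
    """
    rng = np.random.default_rng() if rng is None else rng
    rates = image.reshape(-1) * max_rate_hz            # one rate per pixel, in Hz
    n_steps = int(duration_ms / dt_ms)
    p_spike = rates * (dt_ms / 1000.0)                 # spike probability per time bin
    return rng.random((n_steps, rates.size)) < p_spike

# Example: encode a random 28x28 "image" for 100 ms at 1 ms resolution.
spikes = poisson_encode(np.random.rand(28, 28))
print(spikes.shape)  # (100, 784)
```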

Highlights

  • Recurrent neural networks (RNNs) have been successful in many fields including time-sequence processing (Sak et al., 2014), pattern recognition (Graves et al., 2009), and biology (Maass et al., 2006; Lukosevicius and Jaeger, 2009)

  • The results show that the proposed modular structure has the highest robustness against both input noise and system noise

  • We present the results of 10-class classification tasks with the MNIST dataset, in which the input images are encoded into spike trains by Poisson coding


Summary

Introduction

Recurrent neural networks (RNNs) have been successful in many fields, including time-sequence processing (Sak et al., 2014), pattern recognition (Graves et al., 2009), and biology (Maass et al., 2006; Lukosevicius and Jaeger, 2009). The major difference between RNNs and feedforward networks is that the connections in RNNs form recurrent loops, which create time-dependent dynamics. However, hardware and algorithmic constraints limit the applications of RNNs. One practical paradigm to overcome these difficulties is reservoir computing (RC), proposed by Maass et al. (2002a) and Jaeger (2001). The RC paradigm skips gradient-descent training of the recurrent network and uses a simple readout function to process the states of the neurons. Without backpropagation, spiking neuron models can be applied, leading to low power consumption.
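To illustrate the RC paradigm described above, the sketch below uses a fixed random rate-based reservoir (the paper itself uses spiking LSM neurons) and trains only a linear readout on the reservoir states with ridge regression. The network sizes, weight scalings, and regularization strength are assumptions made for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, n_out = 784, 500, 10

W_in = rng.normal(0.0, 0.1, (n_res, n_in))           # fixed, untrained input weights
W_res = rng.normal(0.0, 1.0, (n_res, n_res))          # fixed, untrained recurrent weights
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))     # scale spectral radius below 1

def run_reservoir(input_sequence):
    """Drive the reservoir with a sequence of input frames; return the final state."""
    x = np.zeros(n_res)
    for u in input_sequence:                           # u: one input frame of length n_in
        x = np.tanh(W_in @ u + W_res @ x)              # recurrent update, never trained
    return x

def train_readout(X, Y, reg=1e-3):
    """Ridge-regression readout: X is (samples, n_res) states, Y is (samples, n_out) targets."""
    return np.linalg.solve(X.T @ X + reg * np.eye(n_res), X.T @ Y)
```

Only `train_readout` involves learning; the recurrent part is generated once and left fixed, which is what removes the need for backpropagation through time.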
