Abstract

The Liquid State Machine (LSM) is a powerful recurrent spiking neural network model that provides an appealing paradigm of computation for realizing brain-inspired neural processors. The conventional LSM model incorporates a fixed, randomly connected recurrent reservoir as a general pre-processing kernel and a trainable readout layer that extracts the firing activities embedded in the reservoir to facilitate pattern recognition. To realize adaptive LSM-based neural processors, we propose a novel Sparse and Self-Organizing LSM (SSO-LSM) architecture with a low-overhead, hardware-friendly Spike-Timing Dependent Plasticity (STDP) mechanism for efficient on-chip reservoir tuning. A data-driven optimization flow is presented to implement the targeted STDP rule efficiently in digital logic with extremely low bit resolutions. The proposed STDP rule not only boosts learning performance, but also induces desirable self-organizing behaviors in the reservoir that naturally lead to a sparser recurrent network. Furthermore, the SSO-LSM architecture incorporates a runtime reconfiguration scheme for sparsifying the synaptic connections projected from the reservoir to the readout layer based upon the monitored variances of firing activities in the reservoir. Using the spoken English letters adopted from the TI46 speech corpus as a benchmark, we demonstrate that the SSO-LSM architecture improves the average learning performance by 2.0% while reducing energy dissipation by 25% compared to a baseline LSM design on a Xilinx Virtex-6 FPGA, with little extra hardware overhead.
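To make the two mechanisms summarized above more concrete, the following is a minimal Python sketch of (a) an integer-step, low-bit-resolution STDP weight update and (b) a variance-driven pruning rule for reservoir-to-readout connections. The bit width, step sizes, variance threshold, and function names here are illustrative assumptions chosen for exposition, not the specific parameters or algorithms reported in the paper.

```python
# Illustrative sketch only: bit widths, step sizes, and thresholds are
# hypothetical and are not the values used in the SSO-LSM design.
import numpy as np

W_BITS = 4                        # assumed low-resolution signed weight width
W_MAX = 2 ** (W_BITS - 1) - 1     # representable range: [-8, 7]
LTP_STEP, LTD_STEP = 1, 1         # integer potentiation / depression steps


def stdp_update(w, pre_spiked, post_spiked, pre_trace, post_trace):
    """Hardware-friendly STDP: integer weight increments gated by coarse
    pre-/post-synaptic spike traces rather than exponential timing curves."""
    if post_spiked and pre_trace > 0:          # pre-before-post: potentiate
        w = min(w + LTP_STEP, W_MAX)
    if pre_spiked and post_trace > 0:          # post-before-pre: depress
        w = max(w - LTD_STEP, -W_MAX - 1)
    return w


def sparsify_readout(rates, mask, var_threshold=0.01):
    """Runtime readout sparsification (hypothetical rule): disconnect
    reservoir neurons whose monitored firing-rate variance is low."""
    variances = np.var(rates, axis=0)          # rates: (time_windows, n_neurons)
    mask = mask.copy()
    mask[variances < var_threshold] = 0        # prune low-variance connections
    return mask
```

The intent of such a sketch is to show why the mechanism is hardware-friendly: the STDP update reduces to a comparison and a saturating integer add, and the sparsification decision only needs per-neuron activity statistics that can be accumulated at runtime.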
