Abstract

Intrinsic plasticity (IP) is a non-Hebbian learning mechanism that self-adapts the intrinsic parameters of each neuron rather than synaptic weights, offering complementary opportunities for improving learning performance. However, integrating IP on-chip to enable per-neuron self-adaptation can incur very large design overheads. This paper is the first work exploring efficient on-chip non-Hebbian IP learning for neural accelerators based on the recurrent spiking neural network model of the liquid state machine (LSM). The proposed LSM neural processor with on-chip IP is made cost-effective from both algorithmic and hardware design points of view. We optimize a baseline IP rule, which delivers state-of-the-art learning performance, to make on-chip hardware integration feasible, and further propose a new hardware-friendly IP rule, SpiKL-IFIP. With the proposed IP rule and its optimized implementation, the hardware LSM neural accelerator with on-chip IP achieves dramatically lower area/power overhead and training latency. On the Xilinx ZC706 FPGA board, the proposed co-optimization substantially improves the cost-effectiveness of on-chip IP. Self-adaptation of reservoir neurons via IP boosts classification accuracy by up to 10.33% on the TI46 speech corpus and 8% on the TIMIT acoustic-phonetic dataset. Moreover, the proposed techniques reduce training energy by up to 49.6% and resource utilization by up to 64.9% while gracefully trading off classification accuracy for design efficiency.
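To make the notion of per-neuron intrinsic adaptation concrete, the sketch below shows a generic homeostatic IP rule applied to a single leaky integrate-and-fire reservoir neuron. It is an illustration only, not the paper's SpiKL-IFIP rule or its optimized baseline; the neuron model, parameter names, and constants are all assumptions chosen for readability. The key property it demonstrates is that the update touches only an intrinsic parameter (the firing threshold), never a synaptic weight.

```python
# Illustrative sketch only: a generic homeostatic intrinsic-plasticity (IP) rule
# for one leaky integrate-and-fire reservoir neuron. NOT the paper's SpiKL-IFIP
# rule; all names and values below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

T = 1000          # simulation steps
tau_mem = 20.0    # membrane time constant (in steps), assumed
v_reset = 0.0     # reset potential, assumed
theta = 1.0       # adaptive firing threshold (the intrinsic parameter)
eta_ip = 0.01     # IP learning rate, assumed
r_target = 0.05   # target firing rate per step, assumed

v = 0.0
spikes = np.zeros(T)
input_current = 0.1 + 0.05 * rng.standard_normal(T)  # synthetic input drive

for t in range(T):
    # Leaky integrate-and-fire membrane update
    v += (-v + input_current[t] * tau_mem) / tau_mem
    fired = v >= theta
    if fired:
        spikes[t] = 1.0
        v = v_reset
    # Intrinsic plasticity: nudge the threshold so the neuron's firing rate
    # tracks the target rate (non-Hebbian: no synaptic weights are modified).
    theta += eta_ip * (float(fired) - r_target)

print(f"mean firing rate = {spikes.mean():.3f}, final threshold = {theta:.3f}")
```

In an on-chip setting, an update of this shape would run locally at every reservoir neuron, which is why the paper's hardware/algorithm co-optimization of the IP rule matters for area, power, and training latency.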
