Abstract

Neuromorphic processors are hardware dedicated to spiking neural networks (SNNs), accelerating SNN operations at low power consumption. Early digital neuromorphic processors define SNN topology in a neuron-centric manner, fully supporting topology reconfiguration. However, this high reconfigurability comes at the cost of large memory usage, and state-of-the-art SNN topologies, such as convolutional SNNs (Conv-SNNs), barely need such high reconfigurability. Further, neuron-centric routing methods hardly allow weight reuse for Conv-SNNs. To address these concerns, we propose the layer-centric event-routing architecture (LaCERA), which, unlike neuron-centric routing methods, uses layers (or sub-layers) as the granularity of topology. LaCERA supports high reconfigurability of Conv-SNN topology and high memory efficiency, owing to lightweight lookup tables for event routing and a high weight-reuse rate. To evaluate LaCERA, we implemented a neuromorphic processor with 32 cores, each employing LaCERA, in a field-programmable gate array. The processor-level evaluation highlights (i) a nearly ideal weight-reuse rate for Conv-SNNs, (ii) high efficiency in event-routing memory usage, ca. 100× that of Loihi, and (iii) high flexibility in partitioning a layer into sub-layers over multiple cores. Further, our neuromorphic processor achieved approximately a 10× improvement in inference speed compared with graphics processing units (TITAN RTX and RTX A6000).
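The layer-centric idea summarized above — one lookup-table entry per (sub-)layer plus a shared convolution kernel, instead of per-neuron fanout tables — can be sketched as follows. This is a hypothetical illustration (all names and the valid, unpadded convolution assumption are ours, not the paper's implementation); it shows why routing memory scales with the number of layers rather than the number of neurons:

```python
# Hypothetical sketch of layer-centric event routing for a Conv-SNN layer.
# One LayerEntry describes the connectivity of an entire (sub-)layer, so the
# routing table is O(layers); a neuron-centric scheme would store a fanout
# list per neuron, i.e., O(neurons).

from dataclasses import dataclass

@dataclass
class LayerEntry:
    """One lookup-table entry describing a whole (sub-)layer's connectivity."""
    in_h: int        # input feature-map height
    in_w: int        # input feature-map width
    out_ch: int      # number of output channels
    k: int           # square kernel size
    stride: int      # convolution stride

def route_event_layer_centric(entry: LayerEntry, x: int, y: int):
    """Map one input spike at (x, y) to the output neurons it affects.

    Every neuron in the layer shares this single entry and one kernel,
    so the same weights are reused for all spatial positions."""
    out_h = (entry.in_h - entry.k) // entry.stride + 1
    out_w = (entry.in_w - entry.k) // entry.stride + 1
    targets = []
    for dy in range(entry.k):          # kernel row tapped by this input
        for dx in range(entry.k):      # kernel column tapped by this input
            if (y - dy) % entry.stride or (x - dx) % entry.stride:
                continue
            oy = (y - dy) // entry.stride
            ox = (x - dx) // entry.stride
            if 0 <= oy < out_h and 0 <= ox < out_w:
                for c in range(entry.out_ch):
                    # (dy, dx) indexes the shared kernel weight to apply
                    targets.append((c, oy, ox, dy, dx))
    return targets

entry = LayerEntry(in_h=8, in_w=8, out_ch=4, k=3, stride=1)
print(len(route_event_layer_centric(entry, 4, 4)))  # 3×3 kernel taps × 4 channels = 36
```

The contrast with neuron-centric routing is in what must be stored: here, a single compact entry and one kernel suffice for the whole layer, while a neuron-centric table would replicate the same fanout pattern once per input neuron.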
