Abstract

Spiking Neural Networks (SNNs) are powerful computation engines for pattern recognition and image classification applications. Apart from application performance such as recognition and classification accuracy, system performance such as throughput becomes important when executing these applications on hardware. We propose a systematic design flow to map SNN-based applications onto crossbar-based neuromorphic hardware, guaranteeing both application and system performance. Synchronous Dataflow Graphs (SDFGs) with extended semantics are used to model these applications and represent their neural network topologies. Self-timed scheduling is then used to analyze throughput, incorporating hardware constraints such as the synaptic memory, communication, and I/O bandwidth of the crossbars. Our design flow integrates CARLsim, a GPU-accelerated application-level SNN simulator, with SDF3, a tool for mapping SDFGs onto hardware. We conducted experiments with realistic and synthetic SNNs on representative neuromorphic hardware, demonstrating throughput-resource trade-offs for a given application performance. For throughput-constrained applications, we show an average 20% reduction in hardware usage with a 19% reduction in energy consumption. For throughput-scalable applications, we show an average 53% higher throughput compared to a state-of-the-art approach.
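To illustrate the kind of analysis the abstract refers to, the sketch below runs a self-timed (as-soon-as-possible) execution of a tiny Synchronous Dataflow Graph and reports its steady-state throughput as sink firings per unit time. This is only a minimal illustrative sketch, not the CARLsim/SDF3 design flow of the paper; the actor names, port rates, execution times, and the three-actor cycle are assumptions chosen for demonstration.

```python
import heapq

# Illustrative SDFG (assumed, not from the paper):
# each actor has a fixed execution time; channels carry tokens with
# (producer, consumer, production rate, consumption rate, initial tokens).
ACTORS = {"src": 2, "proc": 3, "sink": 1}
CHANNELS = [
    ("src", "proc", 1, 1, 0),
    ("proc", "sink", 1, 1, 0),
    ("sink", "src", 1, 1, 1),   # back-edge with one initial token bounds the pipeline
]

def self_timed_throughput(actors, channels, sink="sink", iterations=50):
    """Fire each actor as soon as its input tokens are available and it is idle
    (self-timed scheduling); return sink firings per unit time."""
    tokens = {(p, c): t0 for p, c, _, _, t0 in channels}
    busy_until = {a: 0 for a in actors}
    events = []                      # (completion time, actor) min-heap
    sink_done, last_time = 0, 0

    def try_fire(t):
        for a, exec_t in actors.items():
            ins = [(p, c, cr) for p, c, _, cr, _ in channels if c == a]
            if busy_until[a] <= t and all(tokens[(p, c)] >= cr for p, c, cr in ins):
                for p, c, cr in ins:
                    tokens[(p, c)] -= cr          # consume inputs at firing start
                busy_until[a] = t + exec_t
                heapq.heappush(events, (t + exec_t, a))

    try_fire(0)
    while sink_done < iterations and events:
        t, a = heapq.heappop(events)
        for p, c, pr, _, _ in channels:
            if p == a:
                tokens[(p, c)] += pr              # outputs become available at completion
        if a == sink:
            sink_done, last_time = sink_done + 1, t
        try_fire(t)
    return sink_done / last_time if last_time else 0.0

if __name__ == "__main__":
    # For this assumed graph the cycle time is 2 + 3 + 1 = 6, so throughput ~= 1/6.
    print(self_timed_throughput(ACTORS, CHANNELS))
```

In a real flow, hardware constraints such as crossbar synaptic memory or I/O bandwidth would appear as additional actors, back-edges, or execution-time annotations on such a graph, which is how a dataflow model exposes throughput-resource trade-offs.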
