Abstract

NeuroFlow is a scalable spiking neural network (SNN) simulation platform for off-the-shelf high-performance computing systems that use customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation and deliver optimized performance, for example in the degree of parallelism to employ. The compilation process supports PyNN, a simulator-independent neural network description language, for configuring the processor. NeuroFlow supports a number of commonly used current- or conductance-based neuronal models, such as the integrate-and-fire and Izhikevich models, as well as the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and achieves real-time performance for a network of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times over an 8-core processor, or 2.83 times over GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation.
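
As a rough illustration of the PyNN-based configuration flow described above, the following is a minimal sketch of a simulator-independent network description using the standard PyNN 0.8 API: an Izhikevich population connected through an STDP-enabled projection. The pyNN.nest backend is used only as a stand-in (the paper's own backend module name is not given here), and all population sizes and parameter values are illustrative assumptions rather than values from the paper.

    # Minimal PyNN sketch (illustrative parameters); pyNN.nest is used only as
    # a stand-in backend, not NeuroFlow's actual backend module.
    import pyNN.nest as sim

    sim.setup(timestep=1.0)  # 1 ms simulation time step

    # Excitatory and inhibitory Izhikevich populations (sizes are arbitrary)
    exc = sim.Population(80, sim.Izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0))
    inh = sim.Population(20, sim.Izhikevich(a=0.02, b=0.25, c=-65.0, d=2.0))

    # Spike-timing-dependent plasticity on the excitatory-to-inhibitory projection
    stdp = sim.STDPMechanism(
        timing_dependence=sim.SpikePairRule(tau_plus=20.0, tau_minus=20.0,
                                            A_plus=0.01, A_minus=0.012),
        weight_dependence=sim.AdditiveWeightDependence(w_min=0.0, w_max=0.5),
        weight=0.05, delay=1.0)

    sim.Projection(exc, inh, sim.FixedProbabilityConnector(0.1),
                   synapse_type=stdp, receptor_type="excitatory")

    exc.record("spikes")
    sim.run(1000.0)  # simulate 1 s of biological time
    sim.end()

Because the description is simulator-independent, the same script could in principle target a software simulator or the FPGA system, with the compilation step deciding how the network is mapped onto the hardware.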

Highlights

  • Reverse engineering the brain is one of the grand engineering challenges of this century

  • While Application-Specific Integrated Circuits (ASICs) offer high performance and low power consumption, their architecture is fixed at fabrication and they lack the flexibility to adopt new designs or modifications according to user needs, such as the precision of parameters, the type of arithmetic representation, and the neuronal or synaptic models to be used

  • Compilation pipeline: to automate the process of running neural models and determining hardware parameters, we develop a pipeline of compilation steps that translates a high-level network specification into a hardware configuration for the Field-Programmable Gate Array (FPGA) system (a conceptual sketch follows this list)
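
To make the idea of this compilation pipeline concrete, here is a purely conceptual Python sketch of the kind of mapping step involved: taking a network size and a number of FPGAs and producing hardware parameters such as the degree of parallelism and arithmetic precision. All names and default values below are hypothetical and are not NeuroFlow's actual interface.

    # Hypothetical sketch of one compilation step: partition a network across
    # FPGAs and fix hardware parameters. Names and defaults are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class HardwareConfig:
        num_fpgas: int
        neurons_per_fpga: int
        parallelism: int            # degree of parallelism in the update pipeline
        weight_precision_bits: int  # arithmetic precision for synaptic weights

    def compile_network(num_neurons: int, num_fpgas: int,
                        parallelism: int = 8,
                        weight_precision_bits: int = 16) -> HardwareConfig:
        """Partition the network evenly across FPGAs and record hardware parameters."""
        neurons_per_fpga = -(-num_neurons // num_fpgas)  # ceiling division
        return HardwareConfig(num_fpgas, neurons_per_fpga,
                              parallelism, weight_precision_bits)

    # Example: a 600,000-neuron network mapped onto a 6-FPGA system
    print(compile_network(num_neurons=600_000, num_fpgas=6))

In the actual system, the pipeline translates the high-level PyNN description into such hardware parameters and then generates the corresponding FPGA configuration.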

Introduction

Reverse engineering the brain is one of the grand engineering challenges of this century. The computational capability of brain-style computing is actively being investigated by several projects (Eliasmith et al., 2012; Furber et al., 2014), and new algorithms inspired by the principles of neural computation are being developed (Gütig and Sompolinsky, 2006; Sussillo and Abbott, 2009; Schmidhuber, 2014). A number of computing platforms targeting spiking neural networks (SNNs), such as SpiNNaker (Furber et al., 2013; Sharp et al., 2014), FACETS (Schemmel et al., 2010), Neurogrid (Silver et al., 2007), and TrueNorth (Merolla et al., 2014), have been developed to make large-scale network simulation faster, more energy-efficient, and more accessible. GPUs provide some speedup over multi-core processors and offer good programmability and flexibility, but they tend to have high power consumption.
