Abstract
SpiNNaker is a massively parallel distributed architecture primarily focused on the real-time simulation of spiking neural networks. The largest realization of the architecture consists of one million general-purpose processors, making it the largest neuromorphic computing platform in the world at the present time. Utilizing these processors efficiently requires expert knowledge of the architecture to generate executable code and to harness the potential of the unique inter-processor communications infrastructure that lies at the heart of the SpiNNaker architecture. This work introduces a software suite called SpiNNTools that can map a computational problem described as a graph onto the required set of executables, application data and routing information necessary for simulation on this novel machine. The SpiNNaker architecture is highly scalable, which gives rise to unique challenges in mapping a problem to the machine's resources, loading the generated files onto the machine and subsequently retrieving the results of simulation. In this paper we describe these challenges in detail, along with the solutions implemented.
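The mapping pipeline described above can be illustrated with a small sketch: a problem graph of vertices and edges is placed onto a 2D grid of multi-core chips, and each edge is given a route between the chips of its endpoints. All names here (`Vertex`, `Edge`, `place_graph`, `route`) are hypothetical illustrations of the idea, not the actual SpiNNTools API, and the round-robin placement and dimension-ordered routing are deliberate simplifications of the real algorithms.

```python
# Hypothetical sketch of graph-to-machine mapping; not the SpiNNTools API.
from dataclasses import dataclass
from itertools import product

CORES_PER_CHIP = 17  # SpiNNaker chips have 18 cores; one is reserved as a monitor


@dataclass(frozen=True)
class Vertex:
    name: str


@dataclass(frozen=True)
class Edge:
    pre: Vertex
    post: Vertex


def place_graph(vertices, width, height):
    """Assign each vertex an (x, y, core) slot on a width x height chip grid."""
    slots = ((x, y, c)
             for (x, y) in product(range(width), range(height))
             for c in range(CORES_PER_CHIP))
    placements = {}
    for vertex in vertices:
        try:
            placements[vertex] = next(slots)
        except StopIteration:
            raise RuntimeError("graph does not fit on the machine")
    return placements


def route(placements, edge):
    """Dimension-ordered (x first, then y) route between the chips of an edge."""
    (x0, y0, _), (x1, y1, _) = placements[edge.pre], placements[edge.post]
    path = [(x0, y0)]
    x, y = x0, y0
    while x != x1:
        x += 1 if x1 > x else -1
        path.append((x, y))
    while y != y1:
        y += 1 if y1 > y else -1
        path.append((x, y))
    return path


# Place a 40-vertex graph on a 2x2 machine and route one edge across it.
vertices = [Vertex(f"v{i}") for i in range(40)]
placements = place_graph(vertices, width=2, height=2)
path = route(placements, Edge(vertices[0], vertices[39]))
```

Even this toy version shows why the real problem is hard at scale: placement must respect per-core resource limits, and the routes of many edges share the same physical links, so the real tools must also compress routing entries and balance traffic.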
Highlights
With Moore’s Law (Moore, 1965) coming to an end, the use of parallelism is the principal means of continuing the relentless drive toward ever more computing power, leading to a proliferation of distributed and parallel computing platforms.
A SpiNNaker machine (Furber et al., 2013) is one such distributed parallel computing platform; SpiNNaker is a highly scalable, low-power architecture whose primary application is the simulation of massively parallel spiking neural networks in real time.
This paper describes the functionality of the software stack as of version 4.0.0 of sPyNNaker (Rowley et al., 2017b) and version 4.0.0 of SpiNNakerGraphFrontEnd (Rowley et al., 2017a), and is structured as follows.
Summary
With Moore’s Law (Moore, 1965) coming to an end, the use of parallelism is the principal means of continuing the relentless drive toward ever more computing power, leading to a proliferation of distributed and parallel computing platforms. These range from computing clusters such as Amazon Web Services (Murty, 2008) and the high-throughput Condor platform (Thain et al., 2005), through to crowd-sourcing techniques such as BOINC (Anderson, 2004).