Abstract

Understanding how neurons behave when they are organized into interacting networks is key to understanding how the brain performs complex functions. Different models that approximate the behavior of interconnected neurons have been proposed in the literature. Simulating these models at a level of detail sufficient to observe collective phenomena is computationally intensive. In this study, we analyze the coupled Leaky Integrate-and-Fire model and report on the issues that affect performance when it is implemented on a GPU. We conclude that the problem is heavily memory-bound, and advances in memory technology at the hardware level appear to be the deciding factor in achieving better GPU performance. Our results show that, using an NVidia K40 GPU, a modest 2x speedup can be achieved compared to a parallel implementation running on a modern multi-core CPU. However, a substantial 11.1x speedup can be achieved using an NVidia V100 GPU, mainly due to the improvements in its memory subsystem.
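To illustrate why a coupled Leaky Integrate-and-Fire update tends to be memory-bound, the following is a minimal CUDA sketch, not the authors' implementation: the kernel name, the dense N-by-N weight matrix, and all parameters are illustrative assumptions. Each thread integrates one neuron per time step; the traversal of a full weight row to accumulate synaptic input performs only one multiply-add per weight loaded, so memory bandwidth rather than arithmetic dominates.

```cuda
// Minimal sketch of one time step of a coupled LIF network (illustrative only).
// Each thread updates one neuron; reading an entire row of the dense weight
// matrix W for the synaptic sum is what makes the kernel memory-bound.
__global__ void lif_step(const float* __restrict__ W,           // N x N synaptic weights
                         const int*   __restrict__ spiked_prev, // spike flags at t-1
                         float* V, int* spiked, int N,
                         float dt, float tau, float V_rest,
                         float V_th, float V_reset, float I_ext)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= N) return;

    // Synaptic input: sum the weights of neurons that spiked in the previous step.
    // For large N this row traversal dominates the kernel's memory traffic.
    float I_syn = 0.0f;
    for (int j = 0; j < N; ++j)
        I_syn += W[i * N + j] * spiked_prev[j];

    // Leaky integration: dV/dt = (-(V - V_rest) + I) / tau
    float v = V[i];
    v += dt * (-(v - V_rest) + I_ext + I_syn) / tau;

    // Threshold crossing: emit a spike and reset the membrane potential.
    int s = (v >= V_th);
    spiked[i] = s;
    V[i] = s ? V_reset : v;
}
```

Under these assumptions, the low arithmetic intensity of the inner loop means performance is set largely by how fast the GPU's memory subsystem can stream the weight matrix, which is consistent with the memory-bound behavior reported in the abstract.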
