The performance of solvers for integrate-and-fire models with exact spike timing

Markus Diesmann 1,2*, Alexander Hanuschkin 2, Suzanne Kunkel 2, Moritz Helias 2 and Abigail Morrison 1

1 RIKEN Brain Science Institute, Japan
2 BCCN Freiburg, Germany

Abstract

Discrete-time neuronal network simulation strategies typically constrain spike times to a grid determined by the computational step size. This can introduce artificial synchrony [1]. Continuous-time (i.e., event-driven) approaches, on the other hand, can be computationally demanding, both with respect to calculating future spike times and with respect to event management, particularly for large networks. To address this problem, Morrison et al. [2] presented a general method of handling off-grid spiking in combination with exact subthreshold integration in globally time-driven simulations [3,4]. Within each time step the neuron model emulates an event-driven environment to process incoming spikes, whereas the timing of outgoing spikes is determined by interpolation. The computation step size is therefore a decisive factor for both the integration error and the simulation time.

An alternative approach for calculating the exact spike times of integrate-and-fire neurons with exponential currents was recently published by Brette [5]. The problem of accurately detecting the first threshold crossing of the membrane potential is converted into finding the largest root of a polynomial, to which standard numerical tools such as Descartes' rule of signs and Sturm's theorem can be applied. Although this approach was developed in the context of event-driven simulations, we exploit its ability to predict future threshold crossings in the time-driven environment of NEST [3].

We compare the accuracy of the two approaches in single-neuron simulations and their efficiency in a balanced random network of 10,000 neurons [6]. We show that the network simulation time of the polynomial method depends only weakly on the computational step size, and that its single-neuron integration error is independent of it. Although the polynomial method attains the maximum precision expected from double-precision arithmetic at all input rates and computation step sizes, the interpolation method is more efficient for input rates above a critical value. These results suggest that the cost of processing incoming spikes, rather than the calculation of outgoing spikes, is the dominant factor. We therefore extend the model of Morrison et al. [2] by replacing the interpolation of threshold crossings with the computationally more expensive, but numerically more exact, Newton-Raphson technique. The resulting implementation achieves the maximum precision at all computation step sizes and is more efficient at all input and output rates than either the interpolation or the polynomial implementation.

Acknowledgements: Partially funded by DIP F1.2, BMBF Grant 01GQ0420 to the Bernstein Center for Computational Neuroscience Freiburg, and EU Grant 15879 (FACETS).
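To make the role of exact subthreshold integration concrete, the following minimal sketch illustrates the propagator idea underlying the time-driven scheme [3,4]: between incoming spikes, the linear subthreshold dynamics are advanced exactly by a precomputed matrix exponential, so the step size does not contribute to the integration error. The parameter values, variable names, and the use of SciPy are illustrative assumptions, not the NEST implementation.

```python
# Minimal sketch of exact subthreshold integration for a leaky
# integrate-and-fire neuron with exponential synaptic currents.
# All parameter values are illustrative.
import numpy as np
from scipy.linalg import expm

tau_m, tau_s, C_m = 10.0, 2.0, 250.0   # membrane/synaptic time constants (ms), capacitance (pF)
h = 0.1                                 # computation step size (ms)

# State y = (I_syn, V_m); between incoming spikes, dy/dt = A y.
A = np.array([[-1.0 / tau_s, 0.0],
              [ 1.0 / C_m,  -1.0 / tau_m]])

P = expm(A * h)  # propagator: advances the state exactly by one step h

def step(y):
    """Advance the subthreshold state by one grid step without
    accumulating a discretization error."""
    return P @ y
```

Because the propagator is exact for the linear dynamics, shrinking h only changes how often incoming spikes are delivered and threshold crossings are checked, not the accuracy of the subthreshold trajectory.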

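The conversion of the threshold condition into a root-finding problem for a polynomial, as in Brette [5], can be sketched as follows. Assuming a resting potential of zero and a membrane time constant that is an integer multiple of the synaptic time constant, the substitution x = exp(-t/tau_m) turns the threshold condition into a polynomial in x whose largest admissible root corresponds to the first crossing in time. The sketch below substitutes NumPy's generic root finder for the Descartes/Sturm machinery of the original method and is intended only as an illustration; the function name and parameters are not taken from the paper.

```python
# Schematic illustration of the polynomial formulation of the threshold
# condition. Assumes V_rest = 0 and tau_m an integer multiple of tau_s
# (with ratio >= 2); uses numpy.roots instead of Descartes' rule and
# Sturm sequences.
import numpy as np

def spike_time_polynomial(V0, I0, theta, tau_m, tau_s, C_m):
    """Return the time of the first threshold crossing, or None."""
    k = int(round(tau_m / tau_s))                  # assumed integer ratio
    b = I0 / (C_m * (1.0 / tau_m - 1.0 / tau_s))   # coefficient of the synaptic mode

    # With x = exp(-t/tau_m), the membrane potential is
    # V = (V0 - b) * x + b * x**k, so V = theta is a degree-k polynomial in x.
    coeffs = np.zeros(k + 1)
    coeffs[0] = b              # x**k term
    coeffs[k - 1] = V0 - b     # x**1 term
    coeffs[k] = -theta         # constant term
    roots = np.roots(coeffs)

    # The first crossing in time corresponds to the largest real root
    # x in (0, 1], because x decreases monotonically with t.
    real = roots.real[np.abs(roots.imag) < 1e-9]
    valid = real[(real > 0.0) & (real <= 1.0)]
    if valid.size == 0:
        return None
    return -tau_m * np.log(valid.max())
```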
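Finally, the replacement of the interpolation of threshold crossings by a Newton-Raphson search can be sketched as below. The closed-form subthreshold solution and its derivative are evaluated directly inside the step; the function, its arguments, and the crude safeguards are hypothetical simplifications of what an actual neuron model would need.

```python
# Hedged sketch: Newton-Raphson localization of the threshold crossing
# inside a step, as an alternative to interpolating between the states
# at the step borders. Assumes V_rest = 0, tau_m != tau_s, and that a
# crossing within (0, h] has already been detected.
import numpy as np

def crossing_time(V0, I0, theta, tau_m, tau_s, C_m, h, tol=1e-12, max_iter=20):
    """Return the time t in (0, h] at which V(t) reaches theta."""
    b = I0 / (C_m * (1.0 / tau_m - 1.0 / tau_s))   # synaptic mode coefficient

    def V(t):
        # closed-form subthreshold solution for the state (V0, I0) at t = 0
        return (V0 - b) * np.exp(-t / tau_m) + b * np.exp(-t / tau_s)

    def dV(t):
        return -(V0 - b) / tau_m * np.exp(-t / tau_m) - b / tau_s * np.exp(-t / tau_s)

    t = 0.5 * h                        # start in the middle of the step
    for _ in range(max_iter):
        f = V(t) - theta
        if abs(f) < tol:
            break
        t -= f / dV(t)                 # Newton-Raphson update
    return min(max(t, 0.0), h)         # clamp to the step for safety
```

Each iteration costs a few exponentials and is therefore more expensive than interpolating between the step borders, but it removes the dependence of the spike-time error on the computation step size, as stated above.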
