Abstract
Spiking neural networks (SNNs) exhibit greater biological plausibility and excel at learning spatiotemporal features while consuming less energy than conventional artificial neural networks, particularly on neuromorphic hardware. The leaky integrate-and-fire (LIF) neuron is one of the most widely used spiking neurons in deep learning. However, its sequential information processing makes training slow on long sequences, a critical challenge for real-world applications that rely on extensive datasets. This paper introduces the parallelizable LIF (ParaLIF) neuron, which accelerates SNNs by parallelizing their simulation over time, for both feedforward and recurrent architectures. Compared to LIF on neuromorphic speech, image, and gesture classification tasks, ParaLIF is up to 200 times faster and, on average, achieves greater accuracy with similar sparsity. When integrated into state-of-the-art architectures, ParaLIF's accuracy matches or exceeds the highest performance reported in the literature on various neuromorphic datasets. These findings highlight ParaLIF as a promising approach for developing fast, accurate, and energy-efficient SNNs, particularly well-suited to massive datasets containing long sequences.
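To make the sequential bottleneck concrete, the sketch below contrasts a step-by-step LIF simulation with a reset-free evaluation of the same linear recurrence computed for all timesteps at once. This is a minimal illustration of the general idea of parallelizing over time, not the paper's ParaLIF formulation; the decay factor `alpha`, the threshold, and the reset-free simplification are assumptions made for the example.

```python
import numpy as np

def lif_sequential(inputs, alpha=0.9, threshold=1.0):
    """Sequential LIF without reset: u[t] = alpha * u[t-1] + I[t].
    T timesteps require T dependent updates, so simulation time grows
    linearly with sequence length."""
    u, spikes = 0.0, []
    for i in inputs:
        u = alpha * u + i
        spikes.append(float(u >= threshold))
    return np.array(spikes)

def lif_parallel_no_reset(inputs, alpha=0.9, threshold=1.0):
    """Parallel form of the same reset-free linear recurrence:
    u[t] = sum_{k<=t} alpha^(t-k) * I[k], evaluated for every t at once.
    Here this uses an explicit (T x T) decay matrix for clarity; every
    timestep is computed independently, so the work parallelizes over time."""
    inputs = np.asarray(inputs, dtype=float)
    T = len(inputs)
    t, k = np.arange(T)[:, None], np.arange(T)[None, :]
    decay = np.where(t >= k, alpha ** (t - k), 0.0)
    u = decay @ inputs
    return (u >= threshold).astype(float)

inputs = np.random.rand(16)
assert np.array_equal(lif_sequential(inputs), lif_parallel_no_reset(inputs))
```

In the reset-free case the two functions agree exactly; handling spike-triggered resets and stochastic firing within a parallel formulation is part of what parallelizable neuron designs such as ParaLIF address.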