Abstract

Spiking neural networks (SNNs) have attracted worldwide attention owing to their compelling advantages in low power consumption, high biological plausibility, and strong robustness. However, the intrinsic inference latency of SNNs poses a significant challenge that impedes their further development and application. This latency arises because spiking neurons must accumulate electrical stimuli and generate spikes only when their membrane potential exceeds a firing threshold. Since the firing threshold plays a crucial role in SNN performance, this article proposes a self-driven adaptive threshold plasticity (SATP) mechanism, in which neurons autonomously adjust their firing thresholds based on their individual state information using unsupervised learning rules, with adjustments triggered by the neurons' own firing events. SATP is based on the principle of maximizing the information contained in the output spike rate distribution of each neuron. This article derives the mathematical expression of SATP and provides extensive experimental results demonstrating that SATP effectively reduces SNN inference latency and computation density while improving computational accuracy, enabling SNN models to achieve low latency, sparse computing, and high accuracy.
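To make the firing-threshold mechanism described above concrete, the sketch below implements a leaky integrate-and-fire neuron with a generic event-driven adaptive threshold: the threshold rises when the neuron fires and decays back toward a baseline between spikes. This is a minimal illustration of threshold plasticity in general; the specific update rule, parameter names, and values here are assumptions for exposition and are not the SATP rule derived in the article.

```python
def lif_adaptive_threshold(inputs, tau_m=10.0, tau_th=50.0,
                           theta0=1.0, delta_theta=0.2, dt=1.0):
    """Leaky integrate-and-fire neuron with an adaptive firing threshold.

    Each firing event raises the threshold by `delta_theta` (a self-driven,
    spike-triggered adjustment); between spikes the threshold decays back
    toward the baseline `theta0`. Illustrative only: this homeostatic rule
    and its parameters are assumed, not the paper's SATP expression.
    """
    v, theta = 0.0, theta0
    spikes = []
    for x in inputs:
        v += dt * (-v / tau_m + x)                 # leaky integration of input
        theta += dt * (theta0 - theta) / tau_th    # threshold decays to baseline
        if v >= theta:                             # membrane potential crosses threshold
            spikes.append(1)
            v = 0.0                                # reset membrane potential
            theta += delta_theta                   # spike-triggered threshold increase
        else:
            spikes.append(0)
    return spikes
```

With a constant input current, the rising threshold lengthens the inter-spike interval over time, illustrating how a neuron's own firing history can regulate its output spike rate.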
