Abstract
Spiking neural networks (SNNs) have attracted widespread attention due to their brain-inspired information processing mechanism and low-power, sparse accumulation-based computation on neuromorphic chips. The surrogate gradient method makes it possible to train deep SNNs with backpropagation and shows satisfactory performance on some tasks. However, as the network becomes deeper, spike information may fail to propagate to the deeper layers, causing the output layer to make wrong predictions in recognition tasks. Inspired by the autaptic structure in the cerebral cortex, which is formed by an axon connecting to its own dendrites and is capable of modulating neuronal activity, we use discrete memristors to build feedback-connected autapses that adaptively regulate the precision of the spikes. Further, to prevent an outlier at a single time step from affecting the overall output, we distill the averaged knowledge into the sub-model at each time step to correct potential errors. Combining these two methods, we propose a deep SNN built on the Leaky Integrate-and-Fire (LIF) model with memristive autapses and temporal distillation, referred to as MA-SNN. A series of experiments on static datasets (CIFAR10 and CIFAR100) as well as neuromorphic datasets (DVS-CIFAR10 and N-Caltech101) demonstrate the competitiveness of the proposed model and validate the effectiveness of its components. Code for MA-SNN is available at: https://github.com/CHNtao/MA-SNN.
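The two mechanisms named in the abstract can be illustrated with minimal sketches. Neither block below is taken from the MA-SNN repository; the function names (`spike_fn`, `lif_autapse_forward`, `temporal_distillation_loss`) and all parameters (`tau`, `v_th`, `k`, `g0`, `temperature`) are hypothetical. The first sketch shows a surrogate-gradient LIF neuron whose previous spike is fed back through a memristor-like autaptic conductance; the conductance update rule here is an illustrative stand-in, not the paper's discrete-memristor equation.

```python
import torch

def spike_fn(v, alpha=2.0):
    """Heaviside forward pass with a sigmoid surrogate gradient (straight-through)."""
    s = (v >= 0).float()
    sg = torch.sigmoid(alpha * v)
    # Forward value equals s; gradient flows through the sigmoid surrogate.
    return s.detach() + sg - sg.detach()

def lif_autapse_forward(x_seq, tau=0.5, v_th=1.0, k=0.1, g0=0.5):
    """
    x_seq: input current of shape [T, ...] over T time steps.
    tau:   membrane decay constant; v_th: firing threshold.
    g:     autaptic conductance driven by the neuron's own spikes
           (illustrative discrete-memristor rule; placeholder for the paper's model).
    """
    v = torch.zeros_like(x_seq[0])
    s = torch.zeros_like(x_seq[0])
    g = torch.full_like(x_seq[0], g0)
    out = []
    for t in range(x_seq.shape[0]):
        # Hard reset after a spike, plus autaptic feedback of the last spike via g.
        v = tau * v * (1.0 - s) + x_seq[t] + g * s
        s = spike_fn(v - v_th)
        # Conductance drifts with spike activity, clamped to a physical range.
        g = torch.clamp(g + k * (s - 0.5) * g * (1.0 - g), 0.0, 1.0)
        out.append(s)
    return torch.stack(out)
```

The second sketch is one plausible reading of the temporal distillation idea, under the assumption that the SNN produces logits at every time step: the time-averaged prediction acts as a detached teacher, and a KL term pulls each per-step sub-model toward that consensus so a single outlier step is corrected.

```python
import torch
import torch.nn.functional as F

def temporal_distillation_loss(logits_per_step, temperature=4.0):
    """
    logits_per_step: [T, batch, classes], per-time-step outputs of the SNN.
    Distills the time-averaged prediction (teacher) into each step (student).
    Illustrative KD-style form only; not the paper's exact loss.
    """
    teacher = logits_per_step.mean(dim=0).detach()        # averaged knowledge
    p_teacher = F.softmax(teacher / temperature, dim=-1)
    loss = 0.0
    for t in range(logits_per_step.shape[0]):
        log_p_student = F.log_softmax(logits_per_step[t] / temperature, dim=-1)
        loss = loss + F.kl_div(log_p_student, p_teacher, reduction="batchmean")
    # Standard temperature-squared scaling from knowledge distillation.
    return (temperature ** 2) * loss / logits_per_step.shape[0]
```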