Abstract

Brain-inspired spiking neural networks (SNNs) are claimed to offer advantages for visual classification tasks in terms of energy efficiency and inherent robustness. In this work, we explore how neural coding schemes and the intrinsic structural parameters of Leaky Integrate-and-Fire (LIF) neurons affect inter-layer sparsity, which can serve as a candidate metric for performance evaluation. To this end, we perform a comparative study of four critical neural coding schemes: rate coding (Poisson coding), latency coding, phase coding, and direct coding, together with six intrinsic LIF neuron parameter options, for a total of 24 combined parameter schemes. Specifically, the models were trained with a supervised training algorithm using a surrogate gradient, and two adversarial attacks, the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD), were applied on the CIFAR-10 dataset. We identified the sources of inter-layer sparsity in SNNs and quantitatively analyzed the differences in sparsity caused by coding schemes, neuron leakage factors, and thresholds. Various aspects of network performance are considered, including inference accuracy, adversarial robustness, and energy efficiency. Our results show that latency coding is the optimal choice for achieving the highest adversarial robustness and energy efficiency against low-intensity attacks, while rate coding offers the best adversarial robustness against medium- and high-intensity attacks. The maximum deviations in robustness and efficiency between coding schemes are 9.35% in VGG5 and 13.59% in VGG9. Increasing the sparsity of spike activity by raising the threshold can produce a short-lived adversarial robustness sweet spot, whereas excessive sparsity caused by changes in threshold and leakage can instead reduce adversarial robustness. The study reveals the advantages, disadvantages, and design space of SNNs along various dimensions, allowing researchers to frame their neuromorphic systems in terms of coding methods, inherent neuron structure, and model learning capabilities.
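To make the encoding step concrete, here is a minimal sketch of two of the four coding schemes compared above, rate (Poisson) coding and latency coding, in plain NumPy. The time-step count `T`, the normalization to [0, 1], and the function names are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of two input encodings named in the abstract; the time-step
# count T and the [0, 1] normalization are illustrative assumptions.
import numpy as np

def rate_encode(x, T, rng):
    """Rate (Poisson) coding: pixel intensity sets spike probability per step."""
    x = np.clip(x, 0.0, 1.0)
    return (rng.random((T, *x.shape)) < x).astype(np.float32)

def latency_encode(x, T):
    """Latency coding: brighter pixels fire earlier, one spike per pixel."""
    x = np.clip(x, 0.0, 1.0)
    t_fire = np.round((1.0 - x) * (T - 1)).astype(int)
    spikes = np.zeros((T, *x.shape), dtype=np.float32)
    for idx, t in np.ndenumerate(t_fire):   # place each pixel's single spike
        spikes[(t, *idx)] = 1.0
    return spikes

rng = np.random.default_rng(0)
img = rng.random((4, 4))                      # a toy 4x4 "image"
print(rate_encode(img, T=8, rng=rng).mean())  # spike density tracks intensity
print(latency_encode(img, T=8).mean())        # one spike per pixel: mean = 1/T
```

Rate coding spreads information over many stochastic spikes, while latency coding emits a single spike per input; this difference is one source of the sparsity gaps between coding schemes that the study quantifies.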
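The leakage factor and threshold studied as intrinsic LIF parameters can likewise be illustrated with a discrete-time neuron. The sketch below assumes the common hard-reset formulation V[t] = beta * V[t-1] + I[t]; the parameter values are arbitrary examples, not the six options evaluated in the paper.

```python
# Minimal discrete-time LIF neuron; beta (leak) and threshold are assumed
# names for the intrinsic parameters, with arbitrary example values.
import numpy as np

def lif_spike_train(inputs, beta=0.9, threshold=1.0):
    """Simulate one LIF neuron over len(inputs) steps; return binary spikes."""
    v = 0.0
    spikes = np.zeros_like(inputs)
    for t, i_t in enumerate(inputs):
        v = beta * v + i_t          # leaky integration of the input current
        if v >= threshold:          # fire once the potential crosses threshold
            spikes[t] = 1.0
            v = 0.0                 # hard reset after a spike
    return spikes

rng = np.random.default_rng(0)
drive = rng.random(1000) * 0.5      # random input current over 1000 steps
for beta, thr in [(0.9, 1.0), (0.9, 2.0), (0.5, 1.0)]:
    s = lif_spike_train(drive, beta, thr)
    print(f"beta={beta}, threshold={thr}: sparsity={1.0 - s.mean():.2f}")
```

Raising the threshold or lowering the leakage factor suppresses firing and increases sparsity, which is the mechanism behind the robustness sweet spot, and its eventual collapse under excessive sparsity, described above.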
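For reference, the two attacks can be written in a few lines of PyTorch; `model` is a placeholder classifier, and the epsilon budget, step size, and pixel clamping are generic assumptions rather than the settings used in the paper.

```python
# Generic FGSM and PGD sketches against a placeholder PyTorch classifier.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """One-step FGSM: perturb x along the sign of the input gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()

def pgd_attack(model, x, y, epsilon, alpha, steps):
    """PGD: iterated FGSM steps projected back onto the epsilon ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)  # L-inf projection
            x_adv = x_adv.clamp(0.0, 1.0).detach()            # valid pixel range
    return x_adv
```

FGSM corresponds to the low-intensity, single-step setting, while PGD's iterated steps give the medium- and high-intensity attacks against which the coding schemes are compared.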
