Abstract

Spiking Neural Networks (SNNs) are fast emerging as an alternative to Deep Neural Networks (DNNs). They are computationally more powerful and provide higher energy efficiency than DNNs. While exciting at first glance, SNNs contain security-sensitive assets (e.g., the neuron threshold voltage) and vulnerabilities (e.g., the sensitivity of classification accuracy to changes in the neuron threshold voltage) that can be exploited by adversaries. We explore global fault injection attacks using the external power supply and laser-induced local power glitches on SNNs designed using common analog neurons to corrupt critical training parameters such as the spike amplitude and the neuron's membrane threshold potential. We also analyze the impact of power-based attacks on an SNN for a digit classification task and observe a worst-case classification accuracy degradation of 85.65%. We explore the impact of various SNN design parameters (e.g., learning rate, spike trace decay constant, and number of neurons) and identify design choices for robust SNN implementation. We recover 30–47% of the classification accuracy degradation for a subset of power-based attacks by modifying SNN training parameters such as the learning rate, trace decay constant, and number of neurons per layer. We also propose hardware-level defenses, e.g., a robust current driver design that is immune to power-oriented attacks and improved circuit sizing of neuron components that reduces/recovers the adversarial accuracy degradation at the cost of negligible area and a 25% power overhead. We also propose a dummy neuron-based detection of voltage fault injection at ∼1% power and area overhead each.
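The abstract names the learning rate and the spike trace decay constant as training parameters that can be tuned to recover accuracy. The sketch below is a minimal, illustrative example (not the paper's exact learning rule or parameter values) of how these two parameters typically appear in an STDP-style weight update for an SNN; the names `lr` and `tau_trace` and the toy values are assumptions for illustration only.

```python
import numpy as np

# Illustrative sketch of the training parameters named in the abstract:
# learning rate (lr) and spike-trace decay constant (tau_trace), as they
# appear in a simplified STDP-style update. Values are assumptions.
lr = 0.01          # learning rate
tau_trace = 20.0   # spike-trace decay constant (in time steps)
dt = 1.0           # simulation time step

def update_trace(trace, pre_spiked):
    """Exponentially decay the pre-synaptic trace and bump it on a spike."""
    trace = trace * np.exp(-dt / tau_trace)
    return trace + pre_spiked.astype(float)

def stdp_potentiate(w, pre_trace, post_spiked):
    """Potentiate synapses whose pre-synaptic trace is high when the
    post-synaptic neuron fires (simplified STDP potentiation)."""
    return w + lr * np.outer(pre_trace, post_spiked.astype(float))

# Toy usage: 4 pre-synaptic inputs driving 2 post-synaptic neurons.
w = np.random.rand(4, 2) * 0.1
pre_trace = np.zeros(4)
pre_trace = update_trace(pre_trace, np.array([1, 0, 1, 0]))
w = stdp_potentiate(w, pre_trace, np.array([0, 1]))
```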

Highlights

  • Artificial Neural Networks (ANNs or NNs), inspired by the functionality of the human brain, consist of layers of neurons interconnected through synapses and can be used to approximate any computable function

  • Spiking Neural Networks (SNNs) are emerging as an alternative to Deep Neural Networks (DNNs) since they are biologically plausible, computationally powerful (Heiberg et al., 2018), and energy-efficient (Merolla et al., 2014; Davies et al., 2018; Tavanaei et al., 2019)

  • We have analyzed the impact of power-based attacks on integrate-and-fire (I&F) CMOS-based neurons, which are the most commonly employed neurons in contemporary SNN architectures


Summary

Introduction

Artificial Neural Networks (ANNs or NNs), inspired by the functionality of the human brain, consist of layers of neurons interconnected through synapses and can be used to approximate any computable function. An attack on a neural network can lead to undesirable or unsafe decisions in real-world applications (e.g., reduced accuracy or confidence in road sign identification during autonomous driving). These attacks can be initiated at the production, training, or final application phase. The critical parameters for SNN operation include the timing of the spikes and the strengths of the synaptic weights between neurons. Each neuron integrates incoming weighted spikes into its membrane potential; when this membrane potential reaches a pre-designed threshold, the neuron fires an output spike. Various neuron models, such as I&F, Hodgkin-Huxley, and spike response, exist with different membrane and spike-generation operations. We have implemented two flavors of the I&F neuron to showcase the power-based attacks.
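To make the I&F threshold mechanism concrete, the sketch below is a minimal behavioral model (discrete-time Python, not the paper's CMOS circuit) of an integrate-and-fire neuron, together with a toy illustration of how a fault that effectively shifts the firing threshold changes the neuron's output spike count. The weight, threshold, and glitch values are illustrative assumptions.

```python
import numpy as np

def integrate_and_fire(input_spikes, weight=0.3, v_threshold=1.0, v_reset=0.0):
    """Accumulate weighted input spikes into the membrane potential and
    emit an output spike whenever the potential crosses v_threshold."""
    v = v_reset
    output = []
    for s in input_spikes:
        v += weight * s               # integrate incoming spike
        if v >= v_threshold:          # threshold crossing -> fire
            output.append(1)
            v = v_reset               # reset membrane potential
        else:
            output.append(0)
    return output

rng = np.random.default_rng(0)
spikes = (rng.random(100) < 0.5).astype(int)

nominal = sum(integrate_and_fire(spikes, v_threshold=1.0))
# A supply glitch that effectively lowers the firing threshold makes the
# neuron fire far more often, corrupting the spike code seen downstream.
glitched = sum(integrate_and_fire(spikes, v_threshold=0.4))
print(f"output spikes: nominal={nominal}, glitched threshold={glitched}")
```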


