Rethinking adversarial attacks on neuromorphic models

Abstract

Spiking Neural Networks (SNNs) are biologically inspired Artificial Neural Networks (ANNs) that emulate the behaviour of biological neurons in spiking-based computational units. However, machine learning (ML) models are known to be vulnerable to adversarial noise, particularly to universal adversarial perturbations (UAP) and adversarial patch (AP) attacks. Despite the claimed inherent robustness of SNNs to adversarial noise, UAP and AP attacks remain under-explored in the spiking domain.
This paper revisits adversarial noise generation from first principles. Specifically, we consider a realistic spiking-aware setting that takes into account constraints from the neuromorphic domain, such as event sparsity and spike-timing integrity. We introduce an approach for crafting spiking-compatible adversarial attacks, namely a spiking UAP and AP, targeting event-based computer vision systems.
We propose a novel, efficient spike-based adversarial noise generation approach that respects neuromorphic constraints, and we show that SNNs can fall victim to more tangible and realistic types of attack.
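The spiking constraints the abstract refers to (events stay binary, perturbations stay sparse) can be illustrated with a toy projection step. The following is a hypothetical NumPy sketch, not the paper's actual generation algorithm: a real-valued perturbed event frame is binarised and limited to a fixed budget of flipped events relative to the clean frame.

```python
import numpy as np

def project_to_spiking(perturbed, budget, reference):
    """Project a real-valued perturbed event frame back onto the
    spiking domain: entries are binarised, and at most `budget`
    events may differ from the clean reference frame.
    Illustrative constraint projection only."""
    binary = (perturbed > 0.5).astype(np.int8)
    diff = np.flatnonzero(binary != reference)
    if diff.size > budget:
        # keep only the `budget` most confident flips (farthest from 0.5)
        scores = np.abs(perturbed.ravel()[diff] - 0.5)
        keep = diff[np.argsort(scores)[::-1][:budget]]
        binary = reference.copy()
        binary.ravel()[keep] = 1 - reference.ravel()[keep]
    return binary

clean = np.zeros((8, 8), dtype=np.int8)
noisy = clean + np.random.default_rng(0).uniform(0, 1, (8, 8))
adv = project_to_spiking(noisy, budget=5, reference=clean)
print(int(np.sum(adv != clean)))  # at most 5 flipped events
```

A real spiking-aware attack would also respect spike-timing integrity (e.g., only shifting events within a small temporal window), which this spatial sketch does not model.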

Similar Papers
  • Research Article
  • Cited by 4
  • 10.56553/popets-2025-0060
Are Neuromorphic Architectures Inherently Privacy-preserving? An Exploratory Study
  • Apr 1, 2025
  • Proceedings on Privacy Enhancing Technologies
  • Ayana Moshruba + 2 more

While machine learning (ML) models are becoming mainstream, including in critical application domains, concerns have been raised about the increasing risk of sensitive data leakage. Various privacy attacks, such as membership inference attacks (MIAs), have been developed to extract data from trained ML models, posing significant risks to data confidentiality. While the predominant work in the ML community considers traditional Artificial Neural Networks (ANNs) as the default neural model, neuromorphic architectures, such as Spiking Neural Networks (SNNs), have recently emerged as an attractive alternative mainly due to their significantly low power consumption. These architectures process information through discrete events, i.e., spikes, to mimic the functioning of biological neurons in the brain. While the privacy issues have been extensively investigated in the context of traditional ANNs, they remain largely unexplored in neuromorphic architectures, and little work has been dedicated to investigating their privacy-preserving properties. In this paper, we investigate the question of whether SNNs have inherent privacy-preserving advantages. Specifically, we investigate SNNs’ privacy properties through the lens of MIAs across diverse datasets, in comparison with ANNs. We explore the impact of different learning algorithms (surrogate gradient and evolutionary learning), programming frameworks (snnTorch, TENNLab, and LAVA), and various parameters on the resilience of SNNs against MIA. Our experiments reveal that SNNs demonstrate consistently superior privacy preservation compared to ANNs, with evolutionary algorithms further enhancing their resilience. For example, on the CIFAR-10 dataset, SNNs achieve an AUC as low as 0.59 compared to 0.82 for ANNs, and on CIFAR-100, SNNs maintain a low AUC of 0.58, whereas ANNs reach 0.88. 
Furthermore, we investigate the privacy-utility trade-off through Differentially Private Stochastic Gradient Descent (DPSGD), observing that SNNs incur a notably lower accuracy drop than ANNs under equivalent privacy constraints.
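The membership inference attacks discussed above score each example (e.g., by model confidence) and threshold that score; the reported AUC measures how separable the member and non-member score distributions are. A minimal rank-based AUC computation on toy confidence values — the distributions below are invented for illustration, not taken from the paper:

```python
import numpy as np

def mia_auc(member_conf, nonmember_conf):
    """AUC of a threshold attack that predicts 'member' when model
    confidence exceeds a threshold. Computed via the Mann-Whitney U
    statistic: AUC = P(member score > non-member score)."""
    scores = np.concatenate([member_conf, nonmember_conf])
    labels = np.concatenate([np.ones(len(member_conf)), np.zeros(len(nonmember_conf))])
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = len(member_conf), len(nonmember_conf)
    u = ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

# toy confidences: members slightly more confident (overfitting signal)
rng = np.random.default_rng(1)
members = np.clip(rng.normal(0.9, 0.05, 200), 0, 1)
nonmembers = np.clip(rng.normal(0.8, 0.1, 200), 0, 1)
print(round(mia_auc(members, nonmembers), 2))
```

An AUC near 0.5 (as the paper reports for SNNs on CIFAR-10/100) means the attacker's scores barely separate members from non-members.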

  • Research Article
  • Cited by 52
  • 10.1016/j.patcog.2020.107584
Universal adversarial perturbations against object detection
  • Aug 10, 2020
  • Pattern Recognition
  • Debang Li + 2 more

  • Research Article
  • Cited by 43
  • 10.3389/fnins.2021.756876
SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training
  • Nov 4, 2021
  • Frontiers in Neuroscience
  • Fangxin Liu + 5 more

Spiking Neural Networks (SNNs) are a pathway that could potentially empower low-power event-driven neuromorphic hardware due to their spatio-temporal information processing capability and high biological plausibility. Although SNNs are currently more efficient than artificial neural networks (ANNs), they are not as accurate as ANNs. Error backpropagation is the most common method for directly training neural networks, promoting the prosperity of ANNs in various deep learning fields. However, since the signals transmitted in the SNN are non-differentiable discrete binary spike events, the activation function in the form of spikes presents difficulties for gradient-based optimization algorithms to be directly applied in SNNs, leading to a performance gap (i.e., accuracy and latency) between SNNs and ANNs. This paper introduces a new learning algorithm, called SSTDP, which bridges the gap between backpropagation (BP)-based learning and spike-time-dependent plasticity (STDP)-based learning to train SNNs efficiently. The scheme incorporates the global optimization process from BP and the efficient weight update derived from STDP. It not only avoids the non-differentiable derivation in the BP process but also utilizes the local feature extraction property of STDP. Consequently, our method can lower the possibility of vanishing spikes in BP training and reduce the number of time steps to reduce network latency. In SSTDP, we employ temporal-based coding and use the Integrate-and-Fire (IF) neuron as the neuron model to provide considerable computational benefits. Our experiments show the effectiveness of the proposed SSTDP learning algorithm by achieving the best classification accuracy of 99.3% on the Caltech 101 dataset, 98.1% on the MNIST dataset, and 91.3% on the CIFAR-10 dataset compared to other SNNs trained with other learning methods. It also surpasses the best inference accuracy of the directly trained SNN with 25~32× lower inference latency.
Moreover, we analyze event-based computations to demonstrate the efficacy of the SNN for inference operation in the spiking domain, and SSTDP methods can achieve 1.3~37.7× fewer addition operations per inference. The code is available at: https://github.com/MXHX7199/SNN-SSTDP.
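The STDP side of the scheme rewards causal pre-before-post spike pairs and punishes the reverse order. A classic pair-based STDP update is sketched below in isolation; SSTDP additionally folds in a global BP-derived error term, which this sketch omits, and the amplitude and time-constant values here are illustrative:

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: potentiate when the presynaptic spike
    precedes the postsynaptic one (LTP), depress otherwise (LTD).
    The update magnitude decays exponentially with the timing gap."""
    dt = t_post - t_pre
    if dt >= 0:
        dw = a_plus * np.exp(-dt / tau)    # pre before post: LTP
    else:
        dw = -a_minus * np.exp(dt / tau)   # post before pre: LTD
    return np.clip(w + dw, 0.0, 1.0)

w = 0.5
w_ltp = stdp_update(w, t_pre=10.0, t_post=15.0)  # causal pair -> increase
w_ltd = stdp_update(w, t_pre=15.0, t_post=10.0)  # anti-causal -> decrease
print(w_ltp > w > w_ltd)
```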

  • Conference Article
  • Cited by 32
  • 10.1109/ijcnn48605.2020.9207297
Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks
  • Jul 1, 2020
  • Alberto Marchisio + 5 more

Spiking Neural Networks (SNNs) claim to present many advantages in terms of biological plausibility and energy efficiency compared to standard Deep Neural Networks (DNNs). Recent works have shown that DNNs are vulnerable to adversarial attacks, i.e., small perturbations added to the input data can lead to targeted or random misclassifications. In this paper, we aim at investigating the key research question: "Are SNNs secure?" Towards this, we perform a comparative study of the security vulnerabilities in SNNs and DNNs w.r.t. the adversarial noise. Afterwards, we propose a novel black-box attack methodology, i.e., without the knowledge of the internal structure of the SNN, which employs a greedy heuristic to automatically generate imperceptible and robust adversarial examples (i.e., attack images) for the given SNN. We perform an in-depth evaluation for a Spiking Deep Belief Network (SDBN) and a DNN having the same number of layers and neurons (to obtain a fair comparison), in order to study the efficiency of our methodology and to understand the differences between SNNs and DNNs w.r.t. the adversarial examples. Our work opens new avenues of research towards the robustness of the SNNs, considering their similarities to the human brain's functionality.

  • Research Article
  • Cited by 17
  • 10.1097/corr.0000000000001679
CORR Synthesis: When Should the Orthopaedic Surgeon Use Artificial Intelligence, Machine Learning, and Deep Learning?
  • Feb 17, 2021
  • Clinical orthopaedics and related research
  • Michael P Murphy + 1 more

  • Conference Article
  • Cited by 4
  • 10.1109/itnt52450.2021.9649179
Robustness of spiking neural networks against adversarial attacks
  • Sep 20, 2021
  • Mikhail Leontev + 2 more

Artificial neural networks (ANNs) are susceptible to adversarial attacks and misclassify images even after slight modifications. Biological neural networks, on the other hand, are known to be robust against adversarial attacks, and spiking neural networks (SNNs) are closer in their organization to biological networks. Hence, SNNs are expected to be less susceptible to adversarial attacks than analog ANNs. We investigate some aspects of the adversarial robustness of analog and spiking artificial neural networks and their selectivity with respect to unknown inputs. Two different classes of SNNs were tested: the first (rate-based SNNs) was obtained by direct conversion of analog ANNs; the second used latency-based information coding and was trained from scratch using biologically plausible local learning rules. The NULL-class method was tested as a way to increase the selectivity of the neural networks. We tested the susceptibility of the different SNN modalities to adversarial examples and the transferability of adversarial samples between analog ANNs and SNNs. We found that coding information in spikes does not make SNNs immune to adversarial attacks; however, latency-based SNNs, unlike rate-based SNNs, were found to be more resistant to adversarial samples produced for analog ANNs.
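The two SNN classes compared above differ mainly in how a scalar input is turned into spikes: rate coding spreads the value over many stochastic spikes, while latency coding puts it into the timing of a single spike. A toy illustration of both encoders, with invented parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_encode(intensity, n_steps=100):
    """Rate coding: a pixel in [0,1] becomes a Bernoulli spike train
    whose mean firing rate matches the intensity (the encoding used
    in typical ANN-to-SNN conversion). Illustrative only."""
    return (rng.random(n_steps) < intensity).astype(np.int8)

def latency_encode(intensity, n_steps=100):
    """Latency coding: brighter pixels spike earlier; a single spike
    carries the value in its timing."""
    t = int(round((1.0 - intensity) * (n_steps - 1)))
    train = np.zeros(n_steps, dtype=np.int8)
    train[t] = 1
    return train

rate = rate_encode(0.8)   # ~80 spikes over 100 steps
lat = latency_encode(0.8) # one spike, early in the window
print(int(rate.sum()), int(lat.sum()), int(np.argmax(lat)))
```

The attack surface differs accordingly: rate codes tolerate a few flipped events, whereas a latency code can be corrupted by shifting a single spike time.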

  • Research Article
  • Cited by 8
  • 10.1109/tcsii.2022.3184313
Robustness of Spiking Neural Networks Based on Time-to-First-Spike Encoding Against Adversarial Attacks
  • Sep 1, 2022
  • IEEE Transactions on Circuits and Systems II: Express Briefs
  • Osamu Nomura + 3 more

Spiking neural networks (SNNs) more closely mimic the human brain than artificial neural networks (ANNs). For SNNs, time-to-first-spike (TTFS) encoding, which represents the output values of neurons based on the timing of a single spike, has been proposed as a promising model to reduce power consumption. Adversarial attacks that can lead ANNs to misrecognize images have been reported in many studies. However, the characteristics of TTFS-based SNNs trained using a backpropagation algorithm against adversarial attacks have not yet been clarified. In particular, the dependence of the robustness against adversarial attacks on spike timings has not been investigated. In this brief, we investigated the robustness of SNNs against adversarial attacks and compared it with that of an ANN. We found that SNNs trained with the appropriate temporal penalty settings are more robust against adversarial images than ANNs.
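Under TTFS encoding, a neuron's output value is the time of its first threshold crossing, so earlier inputs produce earlier output spikes. A toy integrate-and-fire model of this behaviour — the weights, spike times, and threshold below are illustrative, not taken from the brief:

```python
import numpy as np

def first_spike_time(weights, input_times, threshold=1.0, t_max=100):
    """Integrate-and-fire neuron under TTFS coding: each input
    contributes its weight from its single spike time onward, and
    the neuron's output is the time of its first threshold crossing."""
    for t in range(t_max):
        potential = weights[input_times <= t].sum()
        if potential >= threshold:
            return t
    return None  # no spike within the window

w = np.array([0.4, 0.4, 0.4])
times = np.array([2, 5, 9])
print(first_spike_time(w, times))  # fires once enough inputs have arrived
```

This timing dependence is exactly what the brief's temporal penalty regularises, and why adversarial robustness can hinge on spike timings.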

  • Research Article
  • Cited by 10
  • 10.1016/j.patrec.2023.03.001
Consistent attack: Universal adversarial perturbation on embodied vision navigation
  • Mar 7, 2023
  • Pattern Recognition Letters
  • Chengyang Ying + 5 more

  • Conference Article
  • Cited by 49
  • 10.1109/ijcnn.2019.8851732
A Comprehensive Analysis on Adversarial Robustness of Spiking Neural Networks
  • Jul 1, 2019
  • Saima Sharmin + 5 more

Machine learning models are increasingly threatened by adversarial attacks, making the search for models resilient to such attacks important. In this work, we present, for the first time, a comprehensive analysis of the behavior of more bio-plausible networks, namely Spiking Neural Networks (SNNs), under state-of-the-art adversarial tests. We perform a comparative study of the accuracy degradation between a conventional VGG-9 Artificial Neural Network (ANN) and an equivalent spiking network on the CIFAR-10 dataset, in both white-box and black-box settings, for different types of single-step and multi-step FGSM (Fast Gradient Sign Method) attacks. We demonstrate that SNNs tend to show more resiliency than ANNs under the black-box attack scenario. Additionally, we find that SNN robustness is largely dependent on the corresponding training mechanism: SNNs trained by spike-based backpropagation are more adversarially robust than those obtained by ANN-to-SNN conversion rules in several white-box and black-box scenarios. Finally, we propose a simple yet effective framework for crafting adversarial attacks from SNNs. Our results suggest that attacks crafted from SNNs with our proposed method are much stronger than those crafted from ANNs.
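The single-step FGSM attack used in this study perturbs each input dimension by a fixed step in the sign of the loss gradient. A minimal NumPy sketch, assuming the gradient is already available (the white-box setting); multi-step variants simply iterate this update with a smaller step:

```python
import numpy as np

def fgsm(x, grad, eps=0.03):
    """Single-step FGSM: move each input dimension by eps in the
    direction that increases the loss, then clip to the valid
    pixel range [0, 1]. `grad` is the loss gradient w.r.t. x."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

x = np.array([0.2, 0.5, 0.99])
grad = np.array([1.7, -0.3, 2.0])
print(fgsm(x, grad))
```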

  • Research Article
  • Cited by 1
  • 10.1016/j.displa.2023.102479
Exploring aesthetic procedural noise for crafting model-agnostic universal adversarial perturbations
  • Jul 6, 2023
  • Displays
  • Jun Yan + 3 more

  • Conference Article
  • Cited by 46
  • 10.1109/iccv48922.2021.00516
HIRE-SNN: Harnessing the Inherent Robustness of Energy-Efficient Deep Spiking Neural Networks by Training with Crafted Input Noise
  • Oct 1, 2021
  • Souvik Kundu + 2 more

Low-latency deep spiking neural networks (SNNs) have become a promising alternative to conventional artificial neural networks (ANNs) because of their potential for increased energy efficiency on event-driven neuromorphic hardware. Neural networks, including SNNs, however, are subject to various adversarial attacks and must be trained to remain resilient against such attacks for many applications. Nevertheless, due to prohibitively high training costs associated with SNNs, an analysis and optimization of deep SNNs under various adversarial attacks have been largely overlooked. In this paper, we first present a detailed analysis of the inherent robustness of low-latency SNNs against popular gradient-based attacks, namely the fast gradient sign method (FGSM) and projected gradient descent (PGD). Motivated by this analysis, to harness the model's robustness against these attacks we present an SNN training algorithm that uses crafted input noise and incurs no additional training time. To evaluate the merits of our algorithm, we conducted extensive experiments with variants of VGG and ResNet on both the CIFAR-10 and CIFAR-100 datasets. Compared to standard trained direct-input SNNs, our trained models yield improved classification accuracy of up to 13.7% and 10.1% on FGSM and PGD attack generated images, respectively, with negligible loss in clean image accuracy. Our models also outperform inherently-robust SNNs trained on rate-coded inputs with improved or similar classification performance on attack-generated images while having up to 25× and ∼4.6× lower latency and computation energy, respectively. For reproducibility, we have open-sourced the code at github.com/ksouvik52/hiresnn2021.

  • Research Article
  • Cited by 21
  • 10.1038/s41467-024-51110-5
High-performance deep spiking neural networks with 0.3 spikes per neuron
  • Aug 9, 2024
  • Nature Communications
  • Ana Stanojevic + 5 more

Communication by rare, binary spikes is a key factor for the energy efficiency of biological brains. However, it is harder to train biologically-inspired spiking neural networks than artificial neural networks. This is puzzling given that theoretical results provide exact mapping algorithms from artificial to spiking neural networks with time-to-first-spike coding. In this paper we analyze in theory and simulation the learning dynamics of time-to-first-spike-networks and identify a specific instance of the vanishing-or-exploding gradient problem. While two choices of spiking neural network mappings solve this problem at initialization, only the one with a constant slope of the neuron membrane potential at threshold guarantees the equivalence of the training trajectory between spiking and artificial neural networks with rectified linear units. For specific image classification architectures comprising feed-forward dense or convolutional layers, we demonstrate that deep spiking neural network models can be effectively trained from scratch on MNIST and Fashion-MNIST datasets, or fine-tuned on large-scale datasets, such as CIFAR10, CIFAR100 and PLACES365, to achieve the exact same performance as that of artificial neural networks, surpassing previous spiking neural networks. Our approach accomplishes high-performance classification with less than 0.3 spikes per neuron, lending itself for an energy-efficient implementation. We also show that fine-tuning spiking neural networks with our robust gradient descent algorithm enables their optimization for hardware implementations with low latency and resilience to noise and quantization.

  • Research Article
  • Cited by 3
  • 10.14704/web/v19i1/web19001
Modelling an Adaptive Learning System Using Artificial Intelligence
  • Dec 24, 2021
  • Webology
  • Hayder Rahm Dakheel Al-Fayyadh + 2 more

The goal of this paper is to use artificial intelligence to build and evaluate an adaptive learning system where we adopt the basic approaches of spiking neural networks as well as artificial neural networks. Spiking neural networks receive increasing attention due to their advantages over traditional artificial neural networks. They have proven to be energy efficient, biologically plausible, and up to 10^5 times faster if they are simulated on analogue traditional learning systems. Artificial neural network libraries use computational graphs as a pervasive representation; however, spiking models remain heterogeneous and difficult to train. Using the artificial intelligence deductive method, the paper posits two hypotheses that examine whether 1) there exists a common representation for both neural network paradigms for tutorial mentoring, and 2) spiking and non-spiking models can learn a simple recognition task for learning activities for adaptive learning. The first hypothesis is confirmed by specifying and implementing a domain-specific language that generates semantically similar spiking and non-spiking neural networks for tutorial mentoring. Through three classification experiments, the second hypothesis is shown to hold for non-spiking models, but cannot be proven for the spiking models. The paper contributes three findings: 1) a domain-specific language for modelling neural network topologies in adaptive tutorial mentoring for students, 2) a preliminary model for generalizable learning through back-propagation in spiking neural networks for learning activities for students, also presented in the results section, and 3) a method for transferring optimised non-spiking parameters to spiking neural networks, developed for the adaptive learning system. The latter contribution is promising because the vast machine learning literature can spill over to the emerging field of spiking neural networks and adaptive learning computing.
Future work includes improving the back-propagation model, exploring time-dependent models for learning, and adding support for adaptive learning systems.

  • Research Article
  • Cited by 2
  • 10.3390/a17040156
Spike-Weighted Spiking Neural Network with Spiking Long Short-Term Memory: A Biomimetic Approach to Decoding Brain Signals
  • Apr 12, 2024
  • Algorithms
  • Kyle Mcmillan + 4 more

Background. Brain–machine interfaces (BMIs) offer users the ability to directly communicate with digital devices through neural signals decoded with machine learning (ML)-based algorithms. Spiking Neural Networks (SNNs) are a type of Artificial Neural Network (ANN) that operate on neural spikes instead of continuous scalar outputs. Compared to traditional ANNs, SNNs perform fewer computations, use less memory, and mimic biological neurons better. However, SNNs only retain information for short durations, limiting their ability to capture long-term dependencies in time-variant data. Here, we propose a novel spike-weighted SNN with spiking long short-term memory (swSNN-SLSTM) for a regression problem. Spike-weighting captures neuronal firing rate instead of membrane potential, and the SLSTM layer captures long-term dependencies. Methods. We compared the performance of various ML algorithms during decoding of directional movements, using a dataset of microelectrode recordings from a macaque during a directional joystick task, and also an open-source dataset. We thus quantified how swSNN-SLSTM performed compared to existing ML models: an unscented Kalman filter, an LSTM-based ANN, and membrane-based SNN techniques. Result. The proposed swSNN-SLSTM outperforms the unscented Kalman filter, the LSTM-based ANN, and the membrane-based SNN technique. This shows that incorporating SLSTM can better capture long-term dependencies within neural data. Also, our proposed swSNN-SLSTM algorithm shows promise in reducing power consumption and lowering heat dissipation in implanted BMIs.

  • Book Chapter
  • 10.1007/978-981-15-6401-7_12-1
Architectures for Machine Learning
  • Jan 1, 2022
  • Yongkui Yang + 2 more

The term “artificial intelligence (AI)” was coined in 1956, and its development has undergone periods of extreme hype and periods of strong disillusionment since then. Today, AI has received tremendous attention from both academia and industry, and it will remain one of the hottest topics in the foreseeable future. A subset of AI named machine learning (ML) has achieved great success throughout a huge variety of fields, such as computer vision, natural language processing, and computer gaming. ML was first proposed to endow machines with the ability to imitate the learning process of the human brain using neuromorphic models. However, the modelling complexity and limited computing capabilities of machines hindered the development of ML in its early days. Benefiting from the ever-growing computing power and availability of digital data, ML has adopted both the bio-inspired spiking neural network (SNN), or neuromorphic computing, and the practical artificial neural network (ANN), which have become two of the top trending methods with outstanding results. This chapter gives a brief overview of the state-of-the-art architectures and circuits for ML. On the one hand, neuromorphic computing architectures and accelerators are investigated, including bio-inspired computational models and learning methods, microarchitecture, circuit-level design considerations, and prominent neuromorphic chips. On the other hand, architectures for ANNs are outlined, including essential design metrics on ANN accelerators and various state-of-the-art ANN architectures and circuits. Keywords: Machine learning, Neuromorphic computing, Spiking neural network, Artificial neural network, Computer architecture, VLSI, Domain-specific computing.
