Abstract

Spiking Neural Networks (SNNs) are increasingly deployed in applications on resource-constrained embedded systems due to their low power consumption. Unfortunately, SNNs are vulnerable to adversarial examples, which threaten application security. Existing denoising filters can protect SNNs from adversarial examples. However, the reason why filters can defend against adversarial examples remains unclear, and thus a trustworthy defense cannot be ensured. In this work, we aim to explain this reason and to provide a filter that is more robust against different adversarial examples. First, we propose two new norms, l0 and l∞, to describe the spatial and temporal features of adversarial events and to explain the working principles of filters. Second, we propose combining filters to provide a robust defense against different perturbation events. To bridge the gap between this goal and the capabilities of existing filters, we propose a new filter that can defend against both spatially and temporally dense perturbation events. We conduct experiments on two widely used neuromorphic datasets, NMNIST and IBM DVSGesture. Experimental results show that the combined defense restores accuracy to over 80% of the original SNN accuracy.
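
The abstract does not give the exact definitions of the proposed l0 and l∞ norms, so the following is only an illustrative sketch of how spatial and temporal density of perturbation events might be measured on an event stream. It assumes events are (timestamp, x, y, polarity) tuples and an NMNIST-sized 34x34 sensor; the function names, window size, and formulas are assumptions for illustration, not the paper's definitions.

```python
import numpy as np

def spatial_density(events, height=34, width=34):
    """Fraction of pixels touched by at least one perturbation event
    (an l0-style count over the spatial grid). Illustrative only."""
    pixels = {(int(x), int(y)) for _, x, y, _ in events}
    return len(pixels) / (height * width)

def temporal_density(events, window_us=1000):
    """Maximum number of perturbation events falling in any single time
    window (an l-infinity-style measure over time bins). Illustrative only."""
    t = np.asarray([e[0] for e in events], dtype=np.int64)
    if t.size == 0:
        return 0
    bins = np.bincount(t // window_us)
    return int(bins.max())

# Example: a handful of synthetic perturbation events (t in microseconds).
fake_events = [(120, 5, 7, 1), (130, 5, 7, 0), (950, 20, 3, 1), (2100, 5, 8, 1)]
print(spatial_density(fake_events))   # ~0.0026 -> spatially sparse
print(temporal_density(fake_events))  # 3 events in the densest 1 ms window
```

Under this kind of measure, a perturbation that is spatially sparse but temporally dense (or vice versa) would call for a different filter, which is the motivation the abstract gives for combining filters.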
