Abstract

Brain-inspired hyperdimensional computing (HDC), also known as Vector Symbolic Architecture (VSA), is an emerging “non-von Neumann” computing scheme that imitates human brain functions to process information and perform learning tasks using abstract, high-dimensional patterns. Compared with deep neural networks (DNNs), HDC offers advantages such as compact model size, energy efficiency, and few-shot learning. Despite those advantages, one under-investigated area of HDC is adversarial robustness: existing works have shown that HDC is vulnerable to adversarial attacks, where attackers add minor perturbations to the original inputs to “fool” HDC models into producing wrong predictions. In this paper, we systematically study the adversarial robustness of HDC by developing an approach to test and enhance the robustness of HDC against adversarial attacks with two main components: (1) a highly automated testing tool that generates high-quality adversarial data for a given HDC model; and (2) a defense mechanism that utilizes the generated adversarial data to enhance the adversarial robustness of HDC models. The core idea of the testing tool is built on fuzz testing. We customize the fuzzing approach by proposing a similarity-based coverage metric that guides the tool to continuously mutate original inputs into new inputs that trigger incorrect behaviors of the HDC model. Thanks to the use of differential testing, the tool does not require knowing the labels of the samples beforehand. For enhancing adversarial robustness, we design, implement, and evaluate a mechanism to defend HDC models against adversarial data. Its core idea is an adversarial detector trained on the adversarial samples generated by the testing tool. During inference, once an adversarial sample is detected, the defense overrides the prediction result with an “invalid” signal. We evaluate the proposed methods on 4 datasets and 5 adversarial attack scenarios with 6 adversarial generation strategies and 2 defense mechanisms, and compare the performance accordingly. Our approach differentiates between benign and adversarial inputs with over 90% accuracy, up to 55% higher than adversarial training-based baselines. To the best of our knowledge, this paper presents the first comprehensive effort to systematically test and enhance the robustness of this emerging brain-inspired computational model against adversarial data.
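
To make the testing idea concrete, the sketch below illustrates what a similarity-based, coverage-guided fuzzing loop with differential testing could look like. It is an illustrative assumption based only on the abstract, not the paper’s actual implementation; names such as hdc_model, reference_model, class_hypervectors, encode, predict, and the mutate operator are hypothetical.

```python
import numpy as np

def cosine_similarity(a, b):
    # Similarity between an encoded query hypervector and a class hypervector.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def coverage_guided_fuzz(seed_inputs, hdc_model, reference_model,
                         mutate, n_iters=1000, epsilon=0.05):
    """Mutate seed inputs, keeping mutants that (a) reach new similarity
    'coverage' regions and (b) make the HDC model disagree with a reference
    model (differential testing, so no ground-truth labels are needed)."""
    adversarial, queue = [], list(seed_inputs)
    seen_coverage = set()
    for _ in range(n_iters):
        if not queue:
            break
        x = queue.pop(0)
        x_mut = mutate(x, epsilon)                    # small, bounded perturbation
        # Similarity-based coverage: bucket the similarities between the
        # encoded mutant and every class hypervector into a coarse signature.
        sims = [cosine_similarity(hdc_model.encode(x_mut), c)
                for c in hdc_model.class_hypervectors]
        signature = tuple(int(s * 10) for s in sims)
        if signature not in seen_coverage:
            seen_coverage.add(signature)
            queue.append(x_mut)                       # new behavior: keep fuzzing it
        # Differential testing: disagreement flags a likely adversarial input.
        if hdc_model.predict(x_mut) != reference_model.predict(x_mut):
            adversarial.append(x_mut)
    return adversarial
```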
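On the defense side, the abstract describes an adversarial detector that is trained on the fuzzer-generated samples and, at inference time, overrides the prediction with an “invalid” signal when an input is flagged. The sketch below shows one minimal way such a guard could be wired up; the use of a logistic-regression detector is an assumption for illustration only, not the paper’s design.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_detector(benign_inputs, adversarial_inputs):
    # Binary detector: 0 = benign, 1 = adversarial (fuzzer-generated).
    X = np.vstack([benign_inputs, adversarial_inputs])
    y = np.concatenate([np.zeros(len(benign_inputs)),
                        np.ones(len(adversarial_inputs))])
    return LogisticRegression(max_iter=1000).fit(X, y)

def guarded_predict(x, hdc_model, detector):
    # Override the HDC prediction with "invalid" when the detector fires.
    if detector.predict(x.reshape(1, -1))[0] == 1:
        return "invalid"
    return hdc_model.predict(x)
```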
