Abstract

Adversarial machine learning is an emerging field that studies the vulnerabilities of machine learning approaches in adversarial settings and develops techniques to make learning robust to adversarial manipulations. It plays a vital role in various machine learning applications and has recently attracted tremendous attention across different communities. In this paper, we explore different adversarial scenarios in the context of quantum machine learning. We find that, similar to traditional classifiers based on classical neural networks, quantum learning systems are likewise vulnerable to crafted adversarial examples, independent of whether the input data is classical or quantum. In particular, we find that a quantum classifier that achieves nearly state-of-the-art accuracy can be conclusively deceived by adversarial examples obtained by adding imperceptible perturbations to the original legitimate samples. This is explicitly demonstrated with quantum adversarial learning in different scenarios, including classifying real-life images (e.g., handwritten digit images in the dataset MNIST), learning phases of matter (such as ferromagnetic/paramagnetic orders and symmetry-protected topological phases), and classifying quantum data. Furthermore, we show that, based on the information of the adversarial examples at hand, practical defense strategies can be designed to defend against a number of different attacks. Our results uncover the notable vulnerability of quantum machine learning systems to adversarial perturbations, which not only reveals a novel perspective in bridging machine learning and quantum physics in theory but also provides valuable guidance for practical applications of quantum classifiers based on both near-term and future quantum technologies.
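To make the notion of an adversarial example concrete: it is conventionally formalized as a constrained optimization over a bounded perturbation. The formulation below is a standard sketch for context and is not quoted from the paper:

    % Adversarial perturbation: maximize the loss within an epsilon-ball
    \delta^{*} = \arg\max_{\|\delta\|_{p} \le \epsilon}
        L\bigl(h(x + \delta;\,\theta),\, y\bigr),
    \qquad x_{\mathrm{adv}} = x + \delta^{*}

Here h(x; θ) denotes the trained classifier, L the loss function, y the true label, and ε the perturbation budget that keeps x_adv visually (or physically) indistinguishable from the legitimate sample x.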

Highlights

  • The interplay between machine learning and quantum physics may lead to unprecedented perspectives for both fields [1]

  • Our results uncover the notable vulnerability of quantum machine learning systems to adversarial perturbations, which reveals another perspective in bridging machine learning and quantum physics in theory and provides valuable guidance for practical applications of quantum classifiers based on both near-term and future quantum technologies

  • Similar to traditional classifiers based on classical neural networks, quantum classifiers are likewise vulnerable to carefully crafted adversarial examples, which are obtained by adding imperceptible perturbations to the legitimate input data


Summary

INTRODUCTION

The interplay between machine learning and quantum physics may lead to unprecedented perspectives for both fields [1]. Here, we explore different adversarial scenarios in the context of quantum machine learning. We carry out extensive numerical simulations for several concrete examples, covering diverse types of data (including handwritten digit images in the dataset MNIST, simulated time-of-flight images in a cold-atom experiment, and quantum data from a one-dimensional transverse-field Ising model) and different attack strategies for obtaining the adversarial perturbations: the fast gradient sign method [32], basic iterative method [27], momentum iterative method [35], and projected gradient descent [32] in the white-box setting, and the transfer-attack method [70] and zeroth-order optimization [33] in the black-box setting. Our results shed light on the fledgling field of quantum machine learning by uncovering the vulnerability of quantum classifiers through comprehensive numerical simulations, which will provide valuable guidance for practical applications of quantum classifiers to intricate problems where adversarial considerations are inevitable.
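As an illustration of the simplest of these attacks, the fast gradient sign method perturbs each input feature by a fixed step in the direction that increases the classifier's loss. Below is a minimal sketch, assuming a differentiable PyTorch model (which could be a classical network or a simulated variational quantum circuit); the function name fgsm_attack and the hyperparameter values are illustrative, not taken from the paper.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.1):
        """Fast gradient sign method: x_adv = x + epsilon * sign(grad_x loss)."""
        x_adv = x.clone().detach().requires_grad_(True)
        # Untargeted attack: increase the loss on the true label y
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # One signed-gradient step per feature; clamp to keep a valid pixel range
        return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

Iterating this signed-gradient step with a smaller step size, and projecting back into the ε-ball after each iteration, yields the basic iterative and projected-gradient-descent attacks mentioned above; the black-box methods instead transfer perturbations from a substitute model or estimate gradients from queries, without access to the model's internals.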

CLASSICAL ADVERSARIAL LEARNING AND QUANTUM CLASSIFIERS
VULNERABILITY OF QUANTUM CLASSIFIERS
Quantum classifiers
Quantum adversarial learning images
White-box attack
Black-box attack
Adversarial perturbations are not random noises
Larger models are more robust
Quantum adversarial learning topological phases of matter
Adversarial learning quantum data
DEFENSE
CONCLUSION AND OUTLOOK
