Abstract

A key problem in quantum computing is understanding whether quantum machine learning (QML) models implemented on noisy intermediate-scale quantum (NISQ) machines can achieve quantum advantages. Recently, Huang et al. [Nat Commun 12, 2631] partially answered this question through the lens of quantum kernel learning. Namely, they showed that quantum kernels can learn specific datasets with lower generalization error than the optimal classical kernel methods. However, most of their results are established in an ideal setting and ignore the caveats of near-term quantum machines. A crucial open question is therefore: does the power of quantum kernels still hold in the NISQ setting? In this study, we fill this knowledge gap by examining the power of quantum kernels when quantum system noise and sample error are taken into account. Concretely, we first prove that the advantage of quantum kernels vanishes for large datasets, few measurements, and large system noise. With the aim of preserving the superiority of quantum kernels in the NISQ era, we further devise an effective method via indefinite kernel learning. Numerical simulations accord with our theoretical results. Our work provides theoretical guidance for exploring advanced quantum kernels that attain quantum advantages on NISQ devices.

Highlights

  • Kernel methods provide a powerful framework for nonlinear and nonparametric learning, owing to their universality and interpretability [9, 22, 45]

  • We investigate the generalization performance of quantum kernels under the noisy intermediate-scale quantum (NISQ) setting

  • We theoretically show that a large training dataset, a small number of measurement shots, and a large amount of quantum system noise can destroy the superiority of quantum kernels



Introduction

Kernel methods provide a powerful framework for nonlinear and nonparametric learning, owing to their universality and interpretability [9, 22, 45]. A central theoretical contribution of this paper is showing that a larger data size n, higher system noise p, and a smaller number of measurements m render the generalization advantage of quantum kernels inconclusive. This result indicates a negative conclusion for using quantum kernels implemented on NISQ devices to tackle large-scale learning tasks with evident advantages, contradicting the claim of the study [24] that a larger data size n promises a better generalization error. Our work opens up a promising avenue for combining classical indefinite kernel learning methods with quantum kernels to attain quantum advantages in the NISQ era.
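To make the indefinite-kernel repair concrete, the following is a minimal NumPy sketch of spectrum clipping, one of the classical spectrum-repair methods listed in the appendix (alongside flipping and shifting); it is an illustration of the general technique, not the paper's exact procedure. A kernel matrix estimated from finitely many noisy measurements can lose positive semidefiniteness; clipping restores it by zeroing out the negative eigenvalues. The matrix `K_noisy` below is a made-up example of such an estimate.

```python
import numpy as np

def clip_spectrum(K):
    """Project a symmetric (possibly indefinite) kernel matrix onto the
    PSD cone by setting its negative eigenvalues to zero."""
    K = (K + K.T) / 2  # symmetrize to guard against numerical asymmetry
    eigvals, eigvecs = np.linalg.eigh(K)
    eigvals = np.clip(eigvals, 0.0, None)  # clip negative spectrum
    return eigvecs @ np.diag(eigvals) @ eigvecs.T

# Hypothetical noisy kernel estimate; it has one negative eigenvalue,
# so it is not a valid (PSD) kernel matrix before clipping.
K_noisy = np.array([[1.0, 0.9, 0.2],
                    [0.9, 1.0, 0.1],
                    [0.2, 0.1, 0.05]])
K_psd = clip_spectrum(K_noisy)
```

The repaired matrix `K_psd` can then be handed to any standard kernel machine (e.g., an SVM) that requires a positive semidefinite Gram matrix.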

Quantum kernels in the NISQ scenario
Enhance performance of noisy quantum kernels
Conclusion
A The summary of notation
B The results of quantum kernels under the ideal setting
C Proof of Theorem 1
Proof of Lemma 2
Proof of Lemma 3
Proof of Lemma 4
Proof of Lemma 5
Proof of Lemma 6
Numerical evidence for the saturation
Theoretical evidence for saturation
E Proof of Lemma 1
Spectrum clipping method
Spectrum flipping method
Spectrum shifting method
F More details about numerical simulations
The construction of the datasets
The protocol of quantum kernels
The hyper-parameter of RBF kernels and the regularization parameter
Numerical simulation results
Findings
More simulation results with fine-grained measurements