Abstract

Studies addressing the question "Can a learner complete the learning securely?" have recently been spurred from the standpoints of fundamental theory and potential applications. In the relevant context of this question, we present a classical-quantum hybrid sampling protocol and define a security condition that allows only legitimate learners to prepare a finite set of samples that guarantees the success of the learning; the security condition excludes intruders. We do this by combining our security concept with the bound of the so-called probably approximately correct (PAC) learning. We show that while the lower bound on the learning samples guarantees PAC learning, an upper bound can be derived to rule out adversarial learners. Such a secure learning condition is appealing, because it is defined only by the size of samples required for the successful learning and is independent of the algorithm employed. Notably, the security stems from the fundamental quantum no-broadcasting principle. No such condition can thus occur in any classical regime, where learning samples can be copied. Owing to the hybrid architecture, our scheme also offers a practical advantage for implementation in noisy intermediate-scale quantum devices.
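As an informal illustration of the sample-size criterion described above, the following Python sketch computes the standard PAC sample-complexity lower bound for a finite hypothesis class and checks whether a learner's sample count clears it while an adversary's does not. The function names, the adversary_sample_cap parameter, and the specific bound used here are illustrative assumptions, not the paper's exact security condition.

```python
import math

def pac_lower_bound(hypothesis_count: int, epsilon: float, delta: float) -> int:
    """Standard PAC sample-complexity lower bound for a finite hypothesis
    class in the realizable setting: m >= (1/eps) * (ln|H| + ln(1/delta))."""
    return math.ceil((math.log(hypothesis_count) + math.log(1.0 / delta)) / epsilon)

def is_secure_window(n_samples: int, hypothesis_count: int,
                     epsilon: float, delta: float,
                     adversary_sample_cap: int) -> bool:
    """Illustrative check of the 'secure learning' idea: the legitimate
    learner holds enough samples for (eps, delta)-PAC learning, while an
    intruder is assumed to be capped below that bound (the cap is a
    hypothetical parameter, not the bound derived in the paper)."""
    m_min = pac_lower_bound(hypothesis_count, epsilon, delta)
    return n_samples >= m_min and adversary_sample_cap < m_min

# Example: |H| = 2**20, epsilon = 0.05, delta = 0.01
print(pac_lower_bound(2**20, 0.05, 0.01))   # -> 370 samples
print(is_secure_window(400, 2**20, 0.05, 0.01, adversary_sample_cap=100))  # True
```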

Highlights

  • The hybridization of machine learning and quantum theory has been intensively studied, especially to explore the possibility of exploiting quantum learning speedups

  • We present a classical-quantum hybrid sampling protocol and define a security condition that allows only legitimate learners to prepare a finite set of samples guaranteeing the success of the learning, while excluding intruders

  • This is done by combining the security concept with the sample bound of so-called probably approximately correct (PAC) learning


Summary

INTRODUCTION

The hybridization of machine learning and quantum theory has been intensively studied, especially to explore the possibility of exploiting quantum learning speedups. In such a setting, adversarial learners may attempt to intervene: their main objective is to acquire the ability to become the equal of the legitimate learner or to render the legitimate learner's training counterproductive. In this context, one of the open issues is how to define a secure learning condition for detecting and preventing these adversaries. We note that the legitimate learning parties can communicate a (classically) encrypted dataset after generating a secret key via a well-established quantum-key-distribution (QKD) scheme. In that case, it would be impractical for the adversarial learner(s) to extract critical learning information once the QKD is completed. The adversarial learner(s) may instead want to spoil the learning by disrupting the communication; such a purpose can be achieved by disrupting the encrypted data after the key is distributed. Such an architecture helps avoid the use of a largely superposed sample and is well suited to noisy intermediate-scale quantum (NISQ) technologies [18].
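A minimal sketch of the dataset-encryption step mentioned above, assuming a key already shared between the legitimate parties. In the scheme described in the text that key would come from a QKD session; here random bytes stand in for it, and the one-time-pad XOR is an illustrative choice rather than the paper's protocol.

```python
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """One-time-pad style XOR; requires len(key) >= len(data)."""
    return bytes(d ^ k for d, k in zip(data, key))

# Stand-in for a QKD-derived secret key shared by the legitimate parties
# (assumption: the QKD session itself is not modeled here).
dataset = b"label:1,features:0.3,0.7;label:0,features:0.9,0.1"
qkd_key = os.urandom(len(dataset))

ciphertext = xor_bytes(dataset, qkd_key)    # what travels over the channel
recovered = xor_bytes(ciphertext, qkd_key)  # only the key holders can decrypt
assert recovered == dataset
```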

  • PROBLEM
  • SECURE SAMPLING PROTOCOL
  • NO-BROADCASTING OF LEARNING SAMPLES
  • SECURE PROBABLY-APPROXIMATELY-CORRECT LEARNING
  • MULTI-CLASS CLASSIFICATION
  • REMARKS
  • A strategy of the single-machine approach
