Abstract

We quantify the role of scrambling in quantum machine learning by characterizing a quantum neural network's (QNN's) error in terms of the network's scrambling properties, as measured by the out-of-time-ordered correlator (OTOC). A network is trained by minimizing a loss function; we show that the loss function can be bounded by the OTOC, and we prove that the gradient of the loss function can be bounded by the gradient of the OTOC. This demonstrates that the OTOC landscape regulates the trainability of a QNN. We show numerically that this landscape is flat for maximally scrambling QNNs, which can pose a challenge to training. Our results pave the way for the exploration of quantum chaos in quantum neural networks.

Highlights

  • A quantum neural network (QNN) [1–6] is a quantum generalization of a classical neural network [7–9] used to learn or optimize functions

  • We show that when the QNN is maximally scrambling, the out-of-time-ordered correlator (OTOC) landscape is flat, which can pose a challenge to training

  • We show that training error is bounded by the OTOC, a measure of scrambling



Introduction

A quantum neural network (QNN) [1–6] is a quantum generalization of a classical neural network [7–9] used to learn or optimize functions. We relate chaos to QNNs by establishing upper and lower bounds on training error in terms of quantum scrambling. QNNs themselves may have chaotic properties that characterize their learning ability. These properties have been investigated through scrambling measures such as the tripartite mutual information [28] and operator size [29]. Numerical evidence correlating the tripartite mutual information with a network's empirical training error was presented in [28]. We demonstrate that training error can be bounded by the out-of-time-ordered correlator (OTOC), defined in its standard form as the four-point correlator $\langle W^\dagger(t)\, V^\dagger\, W(t)\, V \rangle$, where $W(t) = U^\dagger W U$ is a local operator evolved under the network unitary $U$ and $V$ is a second local operator. This correlator is an essential tool in the study of chaos, as it can characterize fast scramblers [30–34] and has even been used to decode the Hayden-Preskill protocol [35, 36]. We show that when the QNN is maximally scrambling, the OTOC landscape is flat, which can pose a challenge to training.
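As an illustrative sketch (not taken from the paper), the NumPy snippet below evaluates the infinite-temperature OTOC, Re Tr[W(t)† V† W(t) V]/d, for two local Pauli operators, comparing a trivial (non-scrambling) circuit with a Haar-random unitary; the helper names haar_unitary, embed, and otoc are hypothetical, and the paper's own normalization of the OTOC may differ.

    import numpy as np

    def haar_unitary(d, rng):
        # Sample a Haar-random d x d unitary (QR decomposition with phase fix).
        z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
        q, r = np.linalg.qr(z)
        return q * (np.diag(r) / np.abs(np.diag(r)))

    def embed(op, site, n):
        # Place a single-qubit operator on `site` of an n-qubit register.
        out = np.eye(1, dtype=complex)
        for k in range(n):
            out = np.kron(out, op if k == site else np.eye(2, dtype=complex))
        return out

    def otoc(U, W, V):
        # Infinite-temperature OTOC: Re Tr[W(t)† V† W(t) V] / d, with W(t) = U† W U.
        d = U.shape[0]
        Wt = U.conj().T @ W @ U
        return np.real(np.trace(Wt.conj().T @ V.conj().T @ Wt @ V)) / d

    n = 4                       # number of qubits
    d = 2 ** n
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    W = embed(X, 0, n)          # local operator on the first qubit
    V = embed(Z, n - 1, n)      # local operator on the last qubit

    rng = np.random.default_rng(0)
    print(otoc(np.eye(d), W, V))             # non-scrambling circuit: exactly 1
    print(otoc(haar_unitary(d, rng), W, V))  # scrambling unitary: close to 0

Commuting local Paulis give an OTOC of exactly 1 under the trivial circuit, while a Haar-random (maximally scrambling) unitary drives the correlator toward zero, which is the regime in which the paper finds a flat OTOC landscape.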

Background on scrambling
Preliminaries
Main results
Error bounds: bounding the true error and the loss function using OTOCs
Gradient of loss function
Numerical simulations
Discussion
A Training with cost functions
B Supplementary numerical simulations
C Generalizing true error
E Properties of the twirling channel
F Calculus identity
Computing the loss function
Computing true error
H Proof of Corollary 1
Proof of Proposition 2
Lipschitz constant
Maximally scrambling unitaries
Vanishing gradient
