Abstract

Barren plateau landscapes correspond to gradients that vanish exponentially in the number of qubits. Such landscapes have been demonstrated for variational quantum algorithms and quantum neural networks with either deep circuits or global cost functions. For obvious reasons, it is expected that gradient-based optimizers will be significantly affected by barren plateaus. However, whether or not gradient-free optimizers are impacted is a topic of debate, with some arguing that gradient-free approaches are unaffected by barren plateaus. Here we show that, indeed, gradient-free optimizers do not solve the barren plateau problem. Our main result proves that cost function differences, which are the basis for making decisions in a gradient-free optimization, are exponentially suppressed in a barren plateau. Hence, without exponential precision, gradient-free optimizers will not make progress in the optimization. We numerically confirm this by training in a barren plateau with several gradient-free optimizers (the Nelder-Mead, Powell, and COBYLA algorithms), and show that the number of shots required in the optimization grows exponentially with the number of qubits.
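As a rough illustration of this setup, the sketch below runs SciPy's Nelder-Mead, Powell, and COBYLA optimizers on a toy cost function whose landscape variations are artificially suppressed by a factor of 2^-n and whose evaluations carry ~1/sqrt(shots) statistical noise. The cost function, qubit count, and shot count are illustrative placeholders, not the circuit or settings studied in the paper.

```python
# Hedged sketch: gradient-free optimization of a shot-noise-limited toy cost.
# The "cost" is a stand-in whose landscape features are suppressed by 2**-n_qubits,
# mimicking a barren plateau; it is NOT the parameterized circuit used in the paper.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def noisy_cost(theta, n_qubits=10, shots=1000):
    """Toy cost: exponentially small signal plus finite-shot sampling noise."""
    signal = 2.0 ** (-n_qubits) * np.sum(np.cos(theta))  # suppressed landscape variations
    shot_noise = rng.normal(scale=1.0 / np.sqrt(shots))  # ~1/sqrt(shots) statistical error
    return signal + shot_noise

theta0 = rng.uniform(0.0, 2.0 * np.pi, size=8)           # random initial parameters

for method in ["Nelder-Mead", "Powell", "COBYLA"]:
    result = minimize(noisy_cost, theta0, method=method, options={"maxiter": 200})
    print(f"{method:12s} final cost estimate: {result.fun:+.3e}")
```

With these placeholder settings the shot noise dominates the suppressed signal, so the optimizers are essentially navigating noise, which is the qualitative behavior described above.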

Highlights

  • Parameterized quantum circuits offer a flexible paradigm for programming Noisy Intermediate Scale Quantum (NISQ) computers

  • We discuss the implications of employing gradient-free optimizers in a barren plateau

  • Barren plateaus in the training landscape remain an obstacle to making these paradigms scalable


Summary

Introduction

Parameterized quantum circuits offer a flexible paradigm for programming Noisy Intermediate Scale Quantum (NISQ) computers. However, in a barren plateau the gradient vanishes exponentially in the number of qubits, and this exponential suppression implies that one would need exponential precision to make progress in the optimization, causing one’s algorithm to scale exponentially in the number of qubits. We confirm our analytical results with numerical simulations involving several gradient-free optimizers: Nelder-Mead, Powell, and COBYLA. For each of these optimizers, we attempt to train a deep parameterized quantum circuit, corresponding to the barren plateau scenario in Ref. We find that the number of shots (i.e., the amount of statistics) required to begin to train the cost function grows exponentially in the number of qubits. This is the same behavior that one sees for gradient-based methods, and is a hallmark of the barren plateau phenomenon.
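The scaling can be seen with a back-of-the-envelope estimate, sketched below under the illustrative assumption that the relevant cost differences shrink as 2^-n with the number of qubits n (the precise constants depend on the ansatz and cost): since the statistical error of a cost estimated from N shots scales as ~1/sqrt(N), resolving such a difference requires roughly N ≳ 4^n shots.

```python
# Hedged back-of-the-envelope estimate (illustrative assumptions, not the paper's exact numbers):
# if the cost differences a gradient-free optimizer must resolve shrink like ~2**-n_qubits,
# and the statistical error from N shots scales like ~1/sqrt(N), then resolving such a
# difference requires roughly N >= 4**n_qubits shots.
import math

for n_qubits in range(4, 21, 4):
    cost_difference = 2.0 ** (-n_qubits)             # assumed suppression of cost differences
    shots_needed = math.ceil(cost_difference ** -2)  # impose 1/sqrt(N) <= cost_difference
    print(f"{n_qubits:2d} qubits: difference ~ {cost_difference:.1e}, shots needed ~ {shots_needed:.1e}")
```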

Theoretical Background
Cost function
Gradient-Free Optimizers
Nelder-Mead
Powell’s Method
COBYLA
Barren Plateaus
Exponentially suppressed cost differences
Implications for gradient-free optimizers
Numerical Implementation
Discussion
A Proof of Proposition 1