Abstract

The number of measurements demanded by hybrid quantum-classical algorithms such as the variational quantum eigensolver (VQE) is prohibitively high for many problems of practical value. For such problems, realizing quantum advantage will require methods that dramatically reduce this cost. Previous quantum algorithms that reduce the measurement cost (e.g., quantum amplitude and phase estimation) require error rates that are too low for near-term implementation. Here we propose methods that take advantage of the available quantum coherence to maximally enhance the power of sampling on noisy quantum devices, reducing the number of measurements and the runtime compared to the standard sampling method of VQE. Our scheme derives inspiration from quantum metrology, phase estimation, and the more recent "alpha-VQE" proposal, arriving at a general formulation that is robust to error and does not require ancilla qubits. The central object of this method is what we call the "engineered likelihood function" (ELF), used for carrying out Bayesian inference. We show how the ELF formalism enhances the rate of information gain in sampling as the physical hardware transitions from the regime of noisy intermediate-scale quantum computers into that of quantum error corrected ones. This technique speeds up a central component of many quantum algorithms, with applications including chemistry, materials, finance, and beyond. Similar to VQE, we expect small-scale implementations to be realizable on today's quantum devices.
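
As a concrete, hedged illustration of the Bayesian-inference loop sketched above, the following Python snippet performs a grid-based Bayesian update of a prior over the parameter θ using a two-outcome likelihood of Chebyshev form, P(d|θ) = ½[1 + (−1)^d f cos((2m+1)θ)], in which an effective fidelity f crudely stands in for hardware noise. The true value of θ, the number of layers m, the fidelity, the prior width, and the shot count are all illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

def likelihood(d, theta, m, f):
    """Two-outcome likelihood of Chebyshev form, damped by an effective fidelity f:
    P(d | theta) = 0.5 * (1 + (-1)**d * f * cos((2m + 1) * theta))."""
    return 0.5 * (1.0 + (-1) ** d * f * np.cos((2 * m + 1) * theta))

# Illustrative (assumed) settings: true parameter, circuit layers, fidelity.
theta_true, m, f = 1.1, 3, 0.9

# Gaussian prior over a grid of candidate theta values.  A moderately
# concentrated prior mimics the situation after earlier, shallower rounds
# of sampling have already narrowed theta down.
theta_grid = np.linspace(0.0, np.pi, 2001)
prior = np.exp(-0.5 * ((theta_grid - 1.05) / 0.05) ** 2)
prior /= prior.sum()

for _ in range(200):
    # Draw one measurement outcome d in {0, 1} from the "true" likelihood.
    d = int(rng.random() < likelihood(1, theta_true, m, f))
    # Bayes rule: posterior is proportional to prior times likelihood.
    posterior = prior * likelihood(d, theta_grid, m, f)
    prior = posterior / posterior.sum()

mean = float(np.sum(theta_grid * prior))
std = float(np.sqrt(np.sum(prior * (theta_grid - mean) ** 2)))
print(f"posterior mean = {mean:.4f} (true theta = {theta_true}), posterior std = {std:.4f}")
```

In the paper the posterior is tracked with a Gaussian approximation and the circuit parameters of the engineered likelihood function are re-tuned between measurements to maximize the expected reduction of the posterior variance; the fixed likelihood above omits that tuning step for brevity.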

Highlights

  • Which quantum algorithms will deliver practical value first? A recent flurry of methods that cater to the limitations of near-term quantum devices has drawn significant attention

  • As shown in Sec. III A, depolarizing noise reduces but does not eliminate the ability of the likelihood function to distinguish between different possible values of the parameter to be estimated (made concrete in the sketch following this list). This motivates the central question of our work: with realistic, noisy quantum computers, how do we maximize information gain from the coherence available to speed up expectation value estimation, and in doing so, speed up algorithms such as the variational quantum eigensolver (VQE) that rely on sampling? We note that this question is urgently relevant in the current era of noisy quantum devices without quantum error correction, but it remains relevant for error-corrected quantum computation

  • We investigate how fast RMSE_t decreases as t grows for various schemes, including the ancilla-based Chebyshev likelihood function (AB CLF), the ancilla-based engineered likelihood function (AB ELF), the ancilla-free Chebyshev likelihood function (AF CLF), and the ancilla-free engineered likelihood function (AF ELF)
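
To make the preceding bullets concrete: for a two-outcome likelihood of Chebyshev form damped by an effective fidelity f, P(d|θ) = ½[1 + (−1)^d f cos((2m+1)θ)], a short calculation gives the Fisher information below. This damped form is a simplification used here for illustration; it is not a quotation of the paper's noise analysis.

```latex
% Fisher information of a fidelity-damped Chebyshev-type likelihood,
%   P(d \mid \theta) = \tfrac{1}{2}\bigl[1 + (-1)^d f\cos\bigl((2m+1)\theta\bigr)\bigr],  d \in \{0,1\}.
\mathcal{I}_m(\theta)
  = \sum_{d \in \{0,1\}} \frac{\bigl(\partial_\theta P(d \mid \theta)\bigr)^{2}}{P(d \mid \theta)}
  = \frac{f^{2}(2m+1)^{2}\sin^{2}\bigl((2m+1)\theta\bigr)}{1 - f^{2}\cos^{2}\bigl((2m+1)\theta\bigr)},
\qquad
\max_{\theta} \mathcal{I}_m(\theta) = f^{2}(2m+1)^{2}.
```

For any nonzero fidelity the information stays strictly positive away from the extrema of the cosine, so depolarizing noise rescales the attainable information (by roughly f²) rather than eliminating it, while increasing m raises the per-sample information as long as the fidelity holds up. Via the Cramér–Rao bound, this is the intuition for why RMSE_t can fall faster for the Chebyshev and engineered schemes than for standard sampling.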

Summary

INTRODUCTION

Which quantum algorithms will deliver practical value first? A recent flurry of methods that cater to the limitations of near-term quantum devices has drawn significant attention. Other quantum algorithms, such as those for Monte Carlo estimation [17] and for solving linear systems of equations [18,19], lie out of reach for near-term quantum computers in their standard versions, in part due to the coherence requirements needed to implement their estimation subroutines. In Sec. III A it is shown that depolarizing noise reduces but does not eliminate the ability of the likelihood function to distinguish between different possible values of the parameter to be estimated. This motivates the central question of our work: with realistic, noisy quantum computers, how do we maximize information gain from the coherence available to speed up expectation value estimation, and in doing so, speed up algorithms such as VQE that rely on sampling? We conclude in Sec. VII with implications of our results from a broad perspective of quantum computing.
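
To illustrate this trade-off between coherence and information gain, the snippet below assumes a toy noise model in which the effective fidelity of a circuit with 2m+1 layers decays exponentially, f = exp(−λ(2m+1)), inserts it into the peak Fisher information f²(2m+1)² of a Chebyshev-type likelihood (see the expression given above), and scans for the number of layers m that maximizes the information per sample and per layer (a crude proxy for information per unit runtime). The decay model, the values of λ, and both figures of merit are illustrative assumptions rather than the paper's noise model.

```python
import numpy as np

def peak_fisher_info(m, lam):
    """Peak (over theta) Fisher information, f**2 * (2m+1)**2, of a Chebyshev-type
    likelihood whose effective fidelity decays as f = exp(-lam * (2m + 1))."""
    layers = 2 * m + 1
    fidelity = np.exp(-lam * layers)
    return (fidelity * layers) ** 2

m_values = np.arange(0, 200)
for lam in (0.3, 0.1, 0.03, 0.01):                        # assumed per-layer error rates
    info_per_shot = peak_fisher_info(m_values, lam)
    info_per_layer = info_per_shot / (2 * m_values + 1)    # crude runtime proxy
    print(
        f"lambda = {lam:4.2f}: "
        f"best m per shot = {int(m_values[np.argmax(info_per_shot)]):3d}, "
        f"best m per unit runtime = {int(m_values[np.argmax(info_per_layer)]):3d}"
    )
```

As the assumed layer error rate λ shrinks, the optimal number of layers grows, interpolating between standard-sampling-like behavior (m = 0) on very noisy hardware and deep, phase-estimation-like circuits as hardware approaches error correction, consistent with the transition described in the abstract.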

Prior work
Main results
A FIRST EXAMPLE
ENGINEERED LIKELIHOOD FUNCTIONS
Quantum circuits for engineered likelihood functions
Bayesian inference with engineered likelihood functions
Efficient maximization of proxies of the variance reduction factor
Maximizing the Fisher information of the likelihood function
Maximizing the slope of the likelihood function
Approximate Bayesian inference with engineered likelihood functions
SIMULATION RESULTS
Experimental details
Comparing the performance of various schemes
Understanding the performance of Bayesian inference with ELFs
Analyzing the impact of layer fidelity on the performance of estimation
Analyzing the impact of circuit depth on the performance of estimation
A MODEL FOR NOISY ALGORITHM PERFORMANCE
OUTLOOK
Limiting behavior of the variance reduction factor