Abstract

Application-level benchmarks measure how well a quantum device performs meaningful calculations. In the case of parameterized circuit training, the computational task is the preparation of a target quantum state via optimization over a loss landscape. This is complicated by various sources of noise, fixed hardware connectivity, and, in the case of generative modeling, the choice of target distribution. Gradient-based training has become a useful benchmarking task for noisy intermediate-scale quantum computers because of the additional requirement that the optimization step uses the quantum device to estimate the loss function gradient. In this work, we use gradient-based data-driven circuit learning to qualitatively evaluate the performance of several superconducting platform devices and present results that show how error mitigation can improve the training of quantum circuit Born machines with 28 tunable parameters.

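The abstract describes training a quantum circuit Born machine by gradient descent, where the quantum device itself is used to estimate the loss gradient, for example via the parameter-shift rule. Below is a minimal, classically simulated sketch of such a training loop. The two-qubit RY/CNOT ansatz, the Gaussian-kernel MMD loss, the GHZ-like target distribution, and all hyperparameters are illustrative assumptions, not the paper's 28-parameter circuit, loss function, or error-mitigation scheme.

```python
# Minimal sketch of gradient-based data-driven circuit learning (DDCL) for a
# quantum circuit Born machine (QCBM), simulated exactly with NumPy.
# All structural choices here (ansatz, loss, target, learning rate) are
# illustrative assumptions and not the setup used in the paper.
import numpy as np

N_QUBITS = 2
DIM = 2 ** N_QUBITS

def ry(theta):
    """Single-qubit Y rotation, exp(-i * theta * Y / 2)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_single(state, gate, qubit):
    """Apply a single-qubit gate to one qubit of the full state vector."""
    ops = [np.eye(2)] * N_QUBITS
    ops[qubit] = gate
    full = ops[0]
    for op in ops[1:]:
        full = np.kron(full, op)
    return full @ state

# Fixed entangling gate: CNOT with qubit 0 as control, qubit 1 as target.
CNOT_01 = np.array([[1, 0, 0, 0],
                    [0, 1, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0]], dtype=float)

def born_probs(params):
    """Born-rule output distribution |<x|psi(theta)>|^2 of the ansatz."""
    state = np.zeros(DIM); state[0] = 1.0
    for layer in params.reshape(-1, N_QUBITS):
        for q, theta in enumerate(layer):
            state = apply_single(state, ry(theta), q)
        state = CNOT_01 @ state
    return np.abs(state) ** 2

# Gaussian-kernel MMD loss between model and target distributions
# (a common DDCL choice; the abstract does not specify the loss).
xs = np.arange(DIM)
KERNEL = np.exp(-0.5 * (xs[:, None] - xs[None, :]) ** 2)

def mmd_loss(p, q):
    d = p - q
    return d @ KERNEL @ d

def parameter_shift_grad(params, target):
    """Estimate the loss gradient with the parameter-shift rule, the same
    rule a quantum device would apply to sampled probabilities (here we use
    exact simulated probabilities instead of hardware samples)."""
    p = born_probs(params)
    grad = np.zeros_like(params)
    for i in range(len(params)):
        shift = np.zeros_like(params); shift[i] = np.pi / 2
        dp = 0.5 * (born_probs(params + shift) - born_probs(params - shift))
        grad[i] = 2.0 * dp @ KERNEL @ (p - target)   # chain rule through MMD
    return grad

# Train toward an assumed GHZ-like target distribution with gradient descent.
rng = np.random.default_rng(0)
target = np.array([0.5, 0.0, 0.0, 0.5])
params = rng.uniform(0, 2 * np.pi, size=3 * N_QUBITS)
for step in range(200):
    params -= 0.5 * parameter_shift_grad(params, target)
print("final MMD loss:", mmd_loss(born_probs(params), target))
```

On hardware, `born_probs` would be replaced by measured bitstring frequencies, which is where device noise enters and where error mitigation, as studied in the paper, would be applied.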