Abstract
Motivated by estimation of quantum noise models, we study the problem of learning a Pauli channel, or more generally the Pauli error rates of an arbitrary channel. By employing a novel reduction to the "Population Recovery" problem, we give an extremely simple algorithm that learns the Pauli error rates of an n-qubit channel to precision ϵ in ℓ∞ using just O(1/ϵ²)·log(n/ϵ) applications of the channel. This is optimal up to the logarithmic factors. Our algorithm uses only unentangled state preparation and measurements, and the post-measurement classical runtime is just an O(1/ϵ) factor larger than the measurement data size. It is also impervious to a limited model of measurement noise where heralded measurement failures occur independently with probability ≤ 1/4. We then consider the case where the noise channel is close to the identity, meaning that the no-error outcome occurs with probability 1 − η. In the regime of small η we extend our algorithm to achieve multiplicative precision 1 ± ϵ (i.e., additive precision ϵη) using just O(1/(ϵ²η))·log(n/ϵ) applications of the channel.
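To make the object being learned concrete, here is a minimal single-qubit sketch of how unentangled prepare-and-measure experiments expose Pauli error rates: preparing an eigenstate of a Pauli basis and remeasuring in that basis flips the outcome exactly when the applied error anticommutes with that basis, and the three flip rates determine p_X, p_Y, p_Z. This is only an illustrative classical simulation of the measurement statistics, not the paper's n-qubit algorithm; the error rates, shot count, and the flip_rate helper are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-qubit Pauli error rates (illustrative numbers only).
p = {"I": 0.90, "X": 0.04, "Y": 0.02, "Z": 0.04}

# A Pauli error flips the measured bit iff it anticommutes with the
# measurement basis, e.g. X and Y errors flip a Z-basis eigenstate.
ANTICOMMUTES = {"X": {"Y", "Z"}, "Y": {"X", "Z"}, "Z": {"X", "Y"}}

def flip_rate(basis, shots=200_000):
    """Estimate Pr[outcome flips] when a +1 eigenstate of `basis` is
    prepared, sent through the channel, and measured in the same basis."""
    errors = rng.choice(list(p), p=list(p.values()), size=shots)
    return float(np.mean([e in ANTICOMMUTES[basis] for e in errors]))

# f_X = p_Y + p_Z,  f_Y = p_X + p_Z,  f_Z = p_X + p_Y
f = {b: flip_rate(b) for b in "XYZ"}

# Invert the three linear relations to recover the error rates.
est = {
    "X": (f["Y"] + f["Z"] - f["X"]) / 2,
    "Y": (f["X"] + f["Z"] - f["Y"]) / 2,
    "Z": (f["X"] + f["Y"] - f["Z"]) / 2,
}
est["I"] = 1.0 - sum(est.values())
print({pauli: round(rate, 3) for pauli, rate in est.items()})
```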
Highlights
A major challenge in the analysis of engineered quantum systems is estimating and modeling noise
Additional motivation for the Pauli channel model comes from the practical technique of randomized compiling [18, 29], which converts a general noise channel Λ to a Pauli channel Λ_P having the same process fidelity as the original channel.
We refer to the p(C) values for Λ_P as the "Pauli error rates" of the original general channel Λ.
Summary
A major challenge in the analysis of engineered quantum systems is estimating and modeling noise. The goal is to estimate the large Pauli error rates of an unknown channel by preparing states, passing them through the channel, and measuring them. We give very simple and efficient algorithms for learning all of the large Pauli error rates of an n-qubit channel: there is a learning algorithm that, given parameters 0 < δ, ϵ < 1, as well as access to an n-qubit channel with Pauli error rates p, outputs (except with probability at most δ) estimates of those rates that are accurate to within ϵ in ℓ∞. The algorithm also tolerates heralded measurement failures occurring with probability ν; we assume that the failures are independent, and that the algorithm may know the parameter ν (thanks to prior estimation). In this case it is almost automatic to obtain the following extension: the theorem continues to hold for any constant ν.
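To illustrate the access model in the n-qubit setting, the following sketch simulates one style of unentangled data collection: each qubit independently receives a random basis from {X, Y, Z}, the +1 product eigenstate is prepared, and after the channel each qubit is measured in its own basis, recording which qubits flipped. This is a hypothetical illustration of the kind of data such an algorithm consumes, not the paper's estimator; the error_rates dictionary and the sample_round helper are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# Hypothetical sparse n-qubit Pauli error rates (illustrative numbers only):
# each key is a length-n Pauli string, its value the probability of that error.
error_rates = {"IIII": 0.85, "XIII": 0.05, "IYII": 0.06, "IIZY": 0.04}

def sample_round():
    """One round of unentangled data collection: pick an independent random
    basis in {X, Y, Z} for each qubit, prepare the +1 product eigenstate,
    send it through the channel, and measure each qubit in its own basis."""
    bases = rng.choice(list("XYZ"), size=n)
    error = rng.choice(list(error_rates), p=list(error_rates.values()))
    # Qubit i flips iff the error acts non-trivially there with a Pauli that
    # differs from (i.e. anticommutes with) the chosen measurement basis.
    flips = np.array([e != "I" and e != b for e, b in zip(error, bases)])
    return bases, flips

for _ in range(3):
    bases, flips = sample_round()
    print("bases:", "".join(bases), " flips:", flips.astype(int))
```

Since a non-identity Pauli on a given qubit anticommutes with two of the three basis choices, it shows up as a flip with probability 2/3 over the random basis; each coordinate of the error string is thus observed only through a lossy view, which is the kind of observation that Population Recovery problems deal with.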