Abstract

To date, a great deal of attention has focused on characterizing the performance of quantum error correcting codes via their thresholds: the maximum correctable physical error rate for a given noise model and decoding strategy. Practical quantum computers will necessarily operate below these thresholds, meaning that other performance indicators become important. In this work we consider the scaling of the logical error rate of the toric code and demonstrate how, in turn, this may be used to calculate a key performance indicator. We use a perfect matching decoding algorithm to find the scaling of the logical error rate and identify two distinct operating regimes. The first regime admits a universal scaling analysis due to a mapping to a statistical physics model. The second regime characterizes the behaviour in the limit of small physical error rate and can be understood by counting the error configurations leading to the failure of the decoder. We present a conjecture for the ranges of validity of these two regimes and use them to quantify the overhead—the total number of physical qubits required to perform error correction.
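Schematically, and in generic notation that is not necessarily the paper's (F is an unspecified scaling function, p_th the threshold error rate, ν a scaling exponent and A(d) a combinatorial prefactor), the two regimes take the form

P_{\mathrm{fail}}(p, d) \;\approx\; F\!\left( (p - p_{\mathrm{th}})\, d^{1/\nu} \right) \qquad \text{near threshold,}
P_{\mathrm{fail}}(p, d) \;\sim\; A(d)\, p^{\lceil d/2 \rceil} \qquad \text{as } p \to 0,

where d is the code distance, A(d) plays the role of the count of error configurations that cause the decoder to fail, and ⌈d/2⌉ is typically the smallest number of single-qubit errors that a minimum-weight matching decoder can misinterpret as a logical error.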

Highlights

  • Quantum computers are sensitive to the effects of noise due to unwanted interactions with the environment

  • The surface code [10, 11] is one of a family of topological codes, and is the basis for an approach to fault-tolerant quantum computing for which high thresholds have been reported [12, 13, 14, 15]

  • In order to answer this question, we examine the behavior of the toric code in the presence of uncorrelated bit-flip and phase-flip noise
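As a concrete illustration of this kind of numerical study, the sketch below estimates the logical failure rate of a distance-L toric code under independent phase-flip noise decoded with minimum-weight perfect matching. This is a minimal sketch, not the paper's code: it assumes the open-source PyMatching package for the matching step, and the lattice indexing, parameters and helper names (toric_star_checks, logical_failure_rate) are illustrative choices.

import numpy as np
import pymatching

def toric_star_checks(L):
    # Binary parity-check matrix for the star (vertex) operators of an
    # L x L toric code.  Qubits sit on edges: horizontal edge (r, c) has
    # index r*L + c, vertical edge (r, c) has index L*L + r*L + c.
    # Each vertex touches two horizontal and two vertical edges.
    n_qubits = 2 * L * L
    H = np.zeros((L * L, n_qubits), dtype=np.uint8)
    for r in range(L):
        for c in range(L):
            v = r * L + c
            H[v, r * L + c] = 1                          # horizontal edge to the right
            H[v, r * L + (c - 1) % L] = 1                # horizontal edge to the left
            H[v, L * L + r * L + c] = 1                  # vertical edge below
            H[v, L * L + ((r - 1) % L) * L + c] = 1      # vertical edge above
    return H

def logical_failure_rate(L, p, shots, seed=0):
    # Monte Carlo estimate of the logical failure rate for independent
    # phase-flip (Z) noise decoded with minimum-weight perfect matching.
    # The bit-flip sector is identical by the duality of the square lattice.
    rng = np.random.default_rng(seed)
    H = toric_star_checks(L)
    matcher = pymatching.Matching(H)
    failures = 0
    for _ in range(shots):
        error = (rng.random(2 * L * L) < p).astype(np.uint8)
        syndrome = (H @ error) % 2
        correction = matcher.decode(syndrome)
        residual = (error + correction) % 2
        # The residual error commutes with every check, so it is a closed
        # loop; it is a nontrivial logical operator iff it winds around the
        # torus, i.e. crosses either reference cut an odd number of times.
        winds_horizontally = residual[np.arange(L) * L].sum() % 2       # horizontal edges in column 0
        winds_vertically = residual[L * L + np.arange(L)].sum() % 2     # vertical edges in row 0
        if winds_horizontally or winds_vertically:
            failures += 1
    return failures / shots

if __name__ == "__main__":
    for L in (5, 7, 9):
        print(L, logical_failure_rate(L, p=0.05, shots=2000))

Sweeping p for several code distances in this way produces the failure-rate curves whose scaling behaviour is the subject of the paper.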

Summary

INTRODUCTION

Quantum computers are sensitive to the effects of noise due to unwanted interactions with the environment. We identify two distinct regimes for the scaling of the logical failure rate P_fail of the toric code. The first of these, which we will call the universal scaling hypothesis, extends ideas by Wang et al. [30] and uses rescaling arguments based on a mapping to a well-studied model in statistical physics (the two-dimensional random-bond Ising model, or RBIM). This approach provides a good estimate for P_fail when the error weight (the number of qubits an operator acts on non-trivially) is high and the code distance is large. As the physical error rate p decreases, there is a point at which finite-size effects begin to dominate and we no longer expect the universal scaling hypothesis to apply. This limit corresponds to low physical error rates, as well as small lattices; in this second regime, P_fail can instead be understood by counting the error configurations that lead the decoder to fail. In Sec. II we review the toric code and its properties.
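One simple way to probe a scaling hypothesis of this kind numerically is to look for a data collapse: for the right choice of p_th and ν, curves of P_fail versus p at different code distances d should fall on a single curve when plotted against the rescaled variable. The sketch below is illustrative only; the function names and the crude collapse score are assumptions of this sketch, and the failure-rate estimates would in practice come from Monte Carlo runs such as the one sketched earlier.

import numpy as np

def rescaled_variable(p, d, p_th, nu):
    # Single variable on which P_fail should depend near threshold if the
    # universal scaling hypothesis holds.
    return (np.asarray(p) - p_th) * d ** (1.0 / nu)

def collapse_spread(datasets, p_th, nu, bins=20):
    # Quality-of-collapse score.  `datasets` is a list of (d, p_values,
    # p_fail_estimates) triples, one per code distance.  All points are
    # binned by their rescaled variable x; if the curves collapse, points
    # in the same bin have similar P_fail, so the summed within-bin
    # variance is small.
    xs, ys = [], []
    for d, ps, p_fails in datasets:
        xs.append(rescaled_variable(ps, d, p_th, nu))
        ys.append(np.asarray(p_fails, dtype=float))
    x, y = np.concatenate(xs), np.concatenate(ys)
    edges = np.linspace(x.min(), x.max(), bins + 1)
    labels = np.digitize(x, edges[1:-1])
    return sum(y[labels == b].var() for b in range(bins) if np.any(labels == b))

# A crude fit of (p_th, nu) is then a grid search that minimises the spread,
# with `datasets` filled in from Monte Carlo estimates of P_fail.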

Background
Error correction
Simulating noise and error correction
THE UNIVERSAL SCALING HYPOTHESIS
Evidence for the universal scaling hypothesis
THE VALIDITY OF THE TWO REGIMES
THE LOW SINGLE QUBIT ERROR RATE REGIME
Testing the Range of Validity of the Universal Scaling Hypothesis
Testing the Range of Validity of the Low Error Rate Regime
COMPARISON OF THE OVERHEAD IN THE TWO REGIMES
Findings
CONCLUSIONS