Abstract

We still do not have perfect decoders for topological codes that can satisfy all needs of different experimental setups. Recently, a few neural network based decoders have been studied, with the motivation that they can adapt to a wide range of noise models and can easily run on dedicated chips without a full-fledged computer. The latter feature might lead to fast speed and the ability to operate at low temperatures. However, a question which has not been addressed in previous works is whether neural network decoders can handle 2D topological codes with large distances. In this work, we provide a positive answer for the toric code \cite{Kitaev2003Faulttolerantanyon}. The structure of our neural network decoder is inspired by the renormalization group decoder \cite{duclos2010fast, duclos2013fault}. With a fairly strict policy on training time, when the bit-flip error rate is lower than 9% and syndrome extraction is perfect, the neural network decoder performs better as the code distance increases. With a less strict policy, we find it is not hard for the neural decoder to achieve a performance close to that of the minimum-weight perfect matching algorithm. The numerical simulation is done up to code distance d = 64. Last but not least, we describe and analyze a few failed approaches. They guide us to the final design of our neural decoder, but also serve as a caution when we gauge the versatility of stock deep neural networks. The source code of our neural decoder can be found at \cite{sourcecodegithub}.

Highlights

  • Before we can make the components of quantum computers as reliable as those of classical computers, we will need quantum error correction so that we can scale up the computation

  • As we are getting closer to the point where small-size surface codes can be implemented in the lab, it is desirable that decoders can adapt to the noise models of the experimental setups

  • The renormalization group (RG) decoder [2, 3] for toric code already has a structure very similar to the convolutional neural networks (CNNs) used in image classification


Summary

Introduction

Before we can make the components of quantum computers as reliable as those of classical computers, we will need quantum error correction so that we can scale up the computation. The renormalization group (RG) decoder [2, 3] for the toric code already has a structure very similar to the CNNs used in image classification: both try to keep the information needed for the output intact while reducing the size of the lattice, by alternating between local computation and coarse-graining steps. We compare our neural decoder to the minimum-weight perfect matching algorithm, and show in Appendix E that our neural decoder can improve itself when trained on a different error model. We hope these pieces of information together can give a first impression of neural decoders on the toric code. The knowledge we gained can help us design neural decoders for other large codes.
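
To make the RG/CNN analogy concrete, below is a minimal PyTorch sketch of a decoder that alternates local convolutions (with periodic padding, matching the torus) and stride-2 coarse-graining steps before dense readout layers. The channel counts, depth, input format, and output head are illustrative assumptions for the bit-flip-only case, not the architecture used in the paper.

```python
# Minimal sketch of the coarse-graining idea: local 3x3 convolutions on the
# torus alternate with stride-2 convolutions (one "RG step") until the
# lattice is small enough for dense layers to read out the logical class.
import torch
import torch.nn as nn

class ToyRGDecoder(nn.Module):
    def __init__(self, lattice_size=16, channels=32):
        super().__init__()
        layers = []
        in_ch = 1  # one syndrome bit per plaquette (bit-flip noise only)
        size = lattice_size
        while size > 4:
            # local computation: 3x3 conv with circular (periodic) padding
            layers += [nn.Conv2d(in_ch, channels, 3, padding=1,
                                 padding_mode='circular'),
                       nn.ReLU()]
            # coarse-grain: halve the lattice, like one renormalization step
            layers += [nn.Conv2d(channels, channels, 2, stride=2),
                       nn.ReLU()]
            in_ch = channels
            size //= 2
        self.features = nn.Sequential(*layers)
        # dense layers output the logical (homology) class of the error
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(channels * size * size, 64), nn.ReLU(),
            nn.Linear(64, 2))  # e.g. logits for two logical classes

    def forward(self, syndrome):
        return self.head(self.features(syndrome))

# usage: a batch of 16x16 plaquette syndromes -> logical-class logits
logits = ToyRGDecoder()(torch.zeros(8, 1, 16, 16))
```

The stride-2 convolution plays the role of one coarse-graining step; repeating it shrinks a distance-d lattice to a constant size, which is what allows the same layer pattern to scale to larger code distances.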

Toric Code
Renormalization Group Decoder
Dense Layers output
Design and Training of the Neural Nets
Design of the network
Training
Numerical results
Discussion
A Implementation of Belief Propagation Algorithm
B Introduction to Neural Networks
C Comparison to Simpler Approaches
D Technical Details
E Spatially Varying Error Rates