Quantum computing devices are inevitably subject to errors. To leverage quantum technologies for computational benefits in practical applications, quantum algorithms and protocols must be implemented reliably under noise and imperfections. Since noise and imperfections limit the size of quantum circuits that can be realized on a quantum device, developing quantum error mitigation techniques that do not require extra qubits and gates is of critical importance. In this work, we present a deep learning-based protocol for reducing readout errors on quantum hardware. Our technique is based on training an artificial neural network (NN) with the measurement results obtained from experiments with simple quantum circuits consisting of single-qubit gates only. With the NN and deep learning, non-linear readout noise can be corrected, which is not possible with existing linear inversion methods. The advantage of our method over existing methods is demonstrated through quantum readout error mitigation experiments performed on IBM five-qubit quantum devices.
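To illustrate the general idea (not the paper's actual implementation), the sketch below trains a small multilayer perceptron to map noisy measured bit-string probability distributions onto ideal ones, with synthetic calibration data standing in for the single-qubit-gate circuit results described above; the network architecture, noise model, and all names are illustrative assumptions.

```python
# Minimal sketch (illustrative only): NN-based readout-error mitigation.
# A small MLP learns to map noisy measured probability vectors to ideal
# ones; synthetic data stands in for calibration-circuit measurement results.
import numpy as np
import torch
import torch.nn as nn

n_qubits = 2
dim = 2 ** n_qubits          # number of bit-string outcomes

def synthetic_noise(p):
    """Apply a toy non-linear readout distortion to an ideal distribution."""
    q = p + 0.05 * np.roll(p, 1) + 0.02 * p ** 2   # made-up non-linear noise
    return q / q.sum()

# Synthetic "calibration" set: ideal distributions and their noisy counterparts.
rng = np.random.default_rng(0)
ideal = rng.dirichlet(np.ones(dim), size=2000)
noisy = np.array([synthetic_noise(p) for p in ideal])

model = nn.Sequential(
    nn.Linear(dim, 32), nn.ReLU(),
    nn.Linear(32, dim), nn.Softmax(dim=-1),   # output stays a valid distribution
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.tensor(noisy, dtype=torch.float32)
y = torch.tensor(ideal, dtype=torch.float32)

for _ in range(500):                          # learn the noisy -> ideal mapping
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()

# Mitigation step: pass a measured distribution through the trained network.
measured = torch.tensor(synthetic_noise(np.array([0.5, 0.0, 0.0, 0.5])),
                        dtype=torch.float32)
print(model(measured).detach().numpy())       # estimate of the ideal distribution
```

Unlike linear inversion, which corrects readings by applying the inverse of a calibration matrix and so can only undo linear distortions, a network of this form can in principle also fit non-linear dependence of the measured distribution on the ideal one.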