Abstract

Quantum many-body control is a central milestone en route to harnessing quantum technologies. However, the exponential growth of the Hilbert space dimension with the number of qubits makes it challenging to classically simulate quantum many-body systems and, consequently, to devise reliable and robust optimal control protocols. Here we present a reinforcement learning (RL) framework for efficiently controlling quantum many-body systems. We tackle the quantum-control problem by leveraging matrix product states (1) for representing the many-body state and (2) as part of the trainable machine learning architecture for our RL agent. The framework is applied to prepare ground states of the quantum Ising chain, including states in the critical region. It allows us to control systems far larger than neural-network-only architectures permit, while retaining the advantages of deep learning algorithms, such as generalizability and trainable robustness to noise. In particular, we demonstrate that RL agents are capable of finding universal controls, of learning how to optimally steer previously unseen many-body states, and of adapting control protocols on the fly when the quantum dynamics is subject to stochastic perturbations. Furthermore, we map our RL framework to a hybrid quantum–classical algorithm that can be performed on noisy intermediate-scale quantum devices, and we test it in the presence of experimentally relevant sources of noise.
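To make the setup concrete, the following is a minimal, self-contained sketch, not the authors' implementation, of the two ingredients named above: the many-body state of a transverse-field Ising chain is stored as a matrix product state (MPS) and evolved by Trotterized gates, while a placeholder policy (standing in for the trainable, MPS-based RL agent) picks a control field at each time step. All names here (IsingMPSEnv, product_mps, the fidelity-based reward, and the |+...+> target state) are hypothetical and chosen purely for illustration.

import numpy as np

N = 8          # number of spins
DT = 0.05      # Trotter time step
CHI_MAX = 16   # maximum MPS bond dimension (exact for N = 8)
J = 1.0        # Ising ZZ coupling (toy sign convention)

X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def product_mps(local):
    """Bond-dimension-1 MPS for |local>^(x)N, tensors shaped (chiL, 2, chiR)."""
    return [local.reshape(1, 2, 1).astype(complex) for _ in range(N)]

def apply_one_site(mps, gate):
    """Apply the same single-site gate to every tensor (bond dimension unchanged)."""
    return [np.einsum('ab,ibj->iaj', gate, A) for A in mps]

def apply_two_site(mps, gate, i):
    """Apply a two-site gate on sites (i, i+1), splitting back with a truncated SVD."""
    A, B = mps[i], mps[i + 1]
    chiL, chiR = A.shape[0], B.shape[2]
    theta = np.einsum('iaj,jbk->iabk', A, B)                        # merge the two sites
    theta = np.einsum('abcd,icdj->iabj', gate.reshape(2, 2, 2, 2), theta)
    U, S, Vh = np.linalg.svd(theta.reshape(chiL * 2, 2 * chiR), full_matrices=False)
    chi = min(CHI_MAX, int(np.sum(S > 1e-12)))                      # discard tiny singular values
    U, S, Vh = U[:, :chi], S[:chi], Vh[:chi, :]
    S /= np.linalg.norm(S)                                          # keep the state normalized
    mps[i] = U.reshape(chiL, 2, chi)
    mps[i + 1] = (np.diag(S) @ Vh).reshape(chi, 2, chiR)
    # A production MPS/TEBD code would also keep the tensors in canonical form; omitted here.

def overlap(mps1, mps2):
    """<mps1|mps2> via transfer-matrix contraction."""
    E = np.ones((1, 1), dtype=complex)
    for A, B in zip(mps1, mps2):
        E = np.einsum('ij,iak,jal->kl', E, A.conj(), B)
    return E[0, 0]

class IsingMPSEnv:
    """Toy environment: Trotterized Ising dynamics; the RL action is the field h."""
    def __init__(self, target):
        self.target = target
        self.reset()

    def reset(self):
        self.mps = product_mps(np.array([1., 0.]))                  # start from |00...0>
        return self.mps

    def step(self, h):
        # Single-site part of the Trotter step: exp(-i h X dt) on every spin.
        c, s = np.cos(h * DT), np.sin(h * DT)
        self.mps = apply_one_site(self.mps, c * I2 - 1j * s * X)
        # Two-site part: exp(-i J Z(x)Z dt) on every bond (first-order Trotter).
        zz_diag = np.diag(np.kron(Z, Z))
        gate = np.diag(np.exp(-1j * J * DT * zz_diag))
        for i in range(N - 1):
            apply_two_site(self.mps, gate, i)
        reward = abs(overlap(self.target, self.mps)) ** 2           # fidelity with the target
        return self.mps, reward

# Illustrative target: the product state |+...+>, itself a bond-dimension-1 MPS.
target = product_mps(np.array([1., 1.]) / np.sqrt(2))
env = IsingMPSEnv(target)

env.reset()
for _ in range(40):
    h = np.random.uniform(0., 4.)       # placeholder for the action of a trained policy
    _, fidelity = env.step(h)
print(f"final fidelity with target: {fidelity:.3f}")

In the framework described in the abstract, the random placeholder policy would be replaced by a trainable agent whose architecture itself contains matrix product states, and the reward would be tied to the desired Ising ground state; the sketch only fixes the interface between an MPS-simulated environment and an RL control loop.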
