Abstract

Current quantum reinforcement learning control models often assume that the quantum states are known a priori for control optimization. However, full observation of the quantum state is experimentally infeasible because the number of required quantum measurements scales exponentially with the number of qubits. In this paper, we investigate a robust reinforcement learning method that uses partial observations to overcome this difficulty. This control scheme is compatible with near-term quantum devices, where noise is prevalent and predetermining the dynamics of the quantum state is practically impossible. We show that this simplified control scheme can achieve performance similar to, or even better than, conventional methods relying on full observation. We demonstrate the effectiveness of this scheme on examples of quantum state control and the quantum approximate optimization algorithm (QAOA). High-fidelity state control can be achieved even when the noise amplitude is at the same level as the control amplitude. Moreover, an acceptable level of optimization accuracy can be achieved for QAOA with a noisy control Hamiltonian. This robust control optimization model can be trained to compensate for the uncertainties in practical quantum computing.
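To make the idea of control from partial observations concrete, the following is a minimal sketch, not the paper's actual method: a single-qubit toy model with a hypothetical Hamiltonian H = (Δ/2)σz + u·σx, where the policy sees only the expectation value ⟨Z⟩ (one measurement axis) rather than the full state vector, control noise is injected on the pulse amplitude, and a simple random search stands in for a full reinforcement learning algorithm. All parameters and the policy form are illustrative assumptions.

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def evolve(psi, u, dt=0.1, delta=1.0):
    """One time step under the assumed H = (delta/2) Z + u X."""
    H = 0.5 * delta * Z + u * X
    # Exact 2x2 matrix exponential via eigendecomposition
    w, V = np.linalg.eigh(H)
    U = V @ np.diag(np.exp(-1j * w * dt)) @ V.conj().T
    return U @ psi

def rollout(theta, n_steps=20, noise=0.5, rng=None):
    """One episode; the policy observes only <Z>, not the full state."""
    rng = rng or np.random.default_rng()
    psi = np.array([1.0, 0.0], dtype=complex)      # start in |0>
    for _ in range(n_steps):
        z_obs = np.real(psi.conj() @ Z @ psi)      # partial observation
        u = theta[0] * z_obs + theta[1]            # linear policy on <Z>
        u += noise * rng.standard_normal()         # noisy control pulse
        psi = evolve(psi, u)
    target = np.array([0.0, 1.0], dtype=complex)   # target state |1>
    return abs(target.conj() @ psi) ** 2           # fidelity as reward

# Random-search "training" of the two policy parameters (stand-in
# for the reinforcement learning optimizer used in the paper)
rng = np.random.default_rng(0)
best_theta, best_r = np.zeros(2), 0.0
for _ in range(2000):
    cand = best_theta + 0.3 * rng.standard_normal(2)
    r = np.mean([rollout(cand, rng=rng) for _ in range(8)])
    if r > best_r:
        best_theta, best_r = cand, r
print(f"mean fidelity under noise: {best_r:.3f}, theta = {best_theta}")
```

Averaging the reward over several noisy rollouts when scoring a candidate policy is what pushes the search toward controls that are robust to the noise, rather than tuned to a single noise realization.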
