Abstract

Machine-learning (ML) techniques are emerging as valuable tools in experimental physics, and among them, reinforcement learning (RL) offers the potential to control high-dimensional, multistage processes in the presence of fluctuating environments. In this experimental work, we apply RL to the preparation of an ultracold quantum gas, aiming to realize a consistently large number of atoms at microkelvin temperatures. The RL agent determines an optimal set of 30 control parameters in a dynamically changing environment that is characterized by 30 sensed parameters. Comparing this method with supervised-learning regression models, as well as with human-driven control schemes, we find that both ML approaches accurately predict the number of cooled atoms and both occasionally yield superhuman control schemes. However, only the RL method achieves consistent outcomes, even in the presence of a dynamic environment.
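
The abstract frames the control problem as a 30-dimensional set of control parameters (the agent's actions) chosen under a 30-dimensional set of sensed environment parameters (the observations), with the number of cooled atoms as the figure of merit. The sketch below (not the authors' code) illustrates one way such a problem could be cast as an RL environment; the use of the gymnasium API, the ColdAtomEnv class, and the placeholder atom-number model are assumptions made purely for illustration.

```python
# Minimal sketch of an RL environment with 30 control parameters (actions),
# 30 sensed parameters (observations), and the cooled-atom number as reward.
# The atom-number model is a hypothetical stand-in for the real apparatus.
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class ColdAtomEnv(gym.Env):
    """One cooling cycle per episode: choose controls, measure atom number."""

    def __init__(self, n_controls: int = 30, n_sensed: int = 30, seed: int = 0):
        super().__init__()
        self.rng = np.random.default_rng(seed)
        # 30 normalized control parameters set by the agent each cycle.
        self.action_space = spaces.Box(-1.0, 1.0, shape=(n_controls,), dtype=np.float32)
        # 30 sensed parameters characterizing the fluctuating environment.
        self.observation_space = spaces.Box(-1.0, 1.0, shape=(n_sensed,), dtype=np.float32)
        self._optimum = self.rng.uniform(-0.5, 0.5, size=n_controls)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        # The environment drifts between cooling cycles (dynamic environment).
        self._sensed = self.rng.uniform(-1.0, 1.0, size=self.observation_space.shape).astype(np.float32)
        return self._sensed, {}

    def step(self, action):
        # Placeholder model: the atom number peaks when the controls match an
        # optimum that depends on the sensed parameters. In the experiment,
        # the reward would be the measured number of cooled atoms.
        target = self._optimum + 0.1 * self._sensed[: len(self._optimum)]
        atoms = 1e6 * np.exp(-np.sum((np.asarray(action) - target) ** 2))
        return self._sensed, float(atoms), True, False, {}


if __name__ == "__main__":
    env = ColdAtomEnv()
    obs, _ = env.reset()
    _, reward, *_ = env.step(env.action_space.sample())
    print(f"cooled atoms (placeholder model): {reward:.0f}")
```

In this framing, a supervised-learning regression model would instead be trained to predict the atom number from the combined control and sensed parameters, whereas the RL agent learns a policy mapping sensed parameters to control settings.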
