Abstract

The theme of this month’s issue of IEEE Control Systems is “Everyone in Control, Everywhere.” The magazine presents two features and one control education article. The first feature is an example-driven tutorial introduction to quantum control. It is the product of a three-year multidisciplinary collaboration between a team of control engineers and a team of quantum scientists: Marco M. Nicotra, Jieqiu Shao, Joshua Combes, Anne Cross Theurkauf, Penina Axelrad, Liang-Ying Chih, Murray Holland, Alex A. Zozulya, Catie K. LeDesma, Kendall Mehling, and Dana Z. Anderson. In the authors’ experience, the greatest challenge one faces when entering the field of quantum control is the language barrier between the two communities. The aim of this article is to lower this barrier by showing how familiar control strategies (that is, Lyapunov, optimal control, and learning) can be applied in the unfamiliar setting of a quantum system (that is, a cloud of trapped, ultracold atoms). Particular emphasis is given to the derivation of the model and the description of its structural properties. Sidebars throughout the article provide a brief overview of the essential notions and notation required to establish an effective communication channel with quantum physicists and quantum engineers. In essence, this article is a collection of everything that this control team wished they had known at the beginning of the project. They hope that it may be of assistance to members of this community wanting to embark on their first quantum control project. The second feature proposes a model-free deep reinforcement learning strategy for shared control of robot manipulators with obstacle avoidance. It is coauthored by Matteo Rubagotti, Bianca Sangiovanni, Aigerim Nurbayeva, Gian Paolo Incremona, Antonella Ferrara, and Almas Shintemirov. The proposed strategy is tested in simulation and experimentally on a UR5 manipulator, and it is compared with a model predictive control approach. The article shows that deep reinforcement learning exhibits better performance than model predictive control, but only if the provided reference falls within the distribution of the deep reinforcement learning policy. Indeed, the model-based nature of model predictive control allows it to address unforeseen situations that are compatible with the process model, whereas deep reinforcement learning performs poorly in situations that were not, even minimally, experienced during the training process.

