The time-delayed feedback control method, one of the most popular methods for chaos control, is non-invasive and flexible enough to apply to dynamical systems from many fields of science and technology. However, choosing an appropriate feedback gain is challenging, because the stability analysis requires an explicit mathematical model of the controlled system. A second limitation is the so-called odd number limitation. These two problems restrict the method's application in practice. Deep reinforcement learning, by contrast, can learn the controlled environment (the dynamical system) by continually interacting with it, which makes it possible to obtain a suitable feedback gain without an accurate mathematical model of the system. Hence, in this work, a time-delayed feedback control method based on deep reinforcement learning is put forward to address both problems. Compared with traditional time-delayed feedback control, the proposed method is data-driven and supplies a time-varying feedback gain according to the policy trained by the deep reinforcement learning algorithm. It retains the non-invasive property of time-delayed feedback control while extending the operating range, since the time-varying feedback gain overcomes the odd number limitation. In numerical simulations, the proposed method is successfully applied to three different kinds of systems: the discrete logistic map, the non-autonomous Duffing oscillator, and the autonomous Lorenz system.
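For reference, the control law in question is the standard Pyragas form u(t) = K[x(t − τ) − x(t)], which vanishes on the target periodic orbit. The sketch below applies it to the logistic map, one of the three test systems named above; the `policy` function standing in for the learned gain, the constant value K = −0.5, and the parameters r and τ are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def policy(x_current, x_delayed):
    """Stand-in for a trained deep-reinforcement-learning policy.

    In the proposed method the policy would output a time-varying gain
    at every step; here a constant gain is returned purely for
    illustration (K = -0.5 locally stabilizes the fixed point of the
    logistic map at r = 3.8)."""
    return -0.5

def logistic_tdfc(r=3.8, x0=0.4, steps=200, tau=1):
    """Logistic map x_{n+1} = r*x_n*(1 - x_n) under time-delayed
    feedback u_n = K_n * (x_{n-tau} - x_n) (the Pyragas form)."""
    xs = [x0] * (tau + 1)                  # history buffer for the delayed state
    for n in range(tau, tau + steps):
        k = policy(xs[n], xs[n - tau])     # gain from the (learned) policy
        u = k * (xs[n - tau] - xs[n])      # control signal; zero on the target orbit
        xs.append(r * xs[n] * (1 - xs[n]) + u)
    return np.array(xs)

if __name__ == "__main__":
    traj = logistic_tdfc()
    # Without control, r = 3.8 gives chaos; with the delayed feedback the
    # trajectory settles near the fixed point x* = 1 - 1/r ≈ 0.737.
    print("last 5 iterates:", traj[-5:])
```

Because the delayed term equals the current state on the stabilized orbit, the control signal decays to zero, which is the non-invasive property mentioned above; in the paper's method the constant gain here would be replaced by the output of the trained policy at each step.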