Abstract

Deep reinforcement learning (DRL) algorithms are well suited to modeling and controlling complex systems. Controlling chaos is a difficult task, and existing methods leave room for improvement. In this article, we present a DRL-based control method that can stabilize a nonlinear chaotic system without any prior knowledge of the system's equations. We use proximal policy optimization (PPO) to train an agent. The environment is the Lorenz chaotic system, and our goal is to stabilize it as quickly as possible and to minimize the error by adding extra control terms to the system's equations. Accordingly, the reward function accounts for the total triaxial error. The experimental results demonstrate that the trained agent can rapidly suppress chaos in the system regardless of its random initial conditions. A comprehensive comparison of DRL algorithms indicates that PPO is the most efficient and effective algorithm for controlling this chaotic system. Moreover, different maximum control forces were applied to characterize the relationship between the control force and controller performance. To verify the robustness of the controller, random disturbances were introduced during training and testing; the empirical results show that the agent trained with random noise performed better. Chaotic systems are highly nonlinear and extremely sensitive to initial conditions, and DRL is well suited to modeling and controlling such systems.
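As a rough illustration of the setup described above, the following Python sketch implements a controlled Lorenz environment in the Gymnasium interface and trains a PPO agent with Stable-Baselines3. This is a minimal sketch under assumed settings, not the article's implementation: the class name LorenzControlEnv is hypothetical, and the Lorenz parameters (the classic sigma = 10, rho = 28, beta = 8/3), the target equilibrium, the integration step, episode length, reward scaling, and control-force bound u_max are all assumptions for illustration.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class LorenzControlEnv(gym.Env):
    """Lorenz system with an additive control term on each of the three axes.

    Hypothetical sketch: parameter values and reward shape are assumptions,
    not the article's exact configuration.
    """

    def __init__(self, u_max=10.0, dt=0.01, horizon=500):
        self.sigma, self.rho, self.beta = 10.0, 28.0, 8.0 / 3.0
        # Assumed target: the unstable fixed point C+ of the classic Lorenz system.
        c = np.sqrt(self.beta * (self.rho - 1.0))
        self.target = np.array([c, c, self.rho - 1.0])
        self.u_max, self.dt, self.horizon = u_max, dt, horizon
        # Action: bounded control forces (u_x, u_y, u_z); observation: full state.
        self.action_space = spaces.Box(-u_max, u_max, shape=(3,), dtype=np.float32)
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(3,), dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        # Random initial condition, as emphasized in the abstract.
        self.state = self.np_random.uniform(-20.0, 20.0, size=3)
        self.t = 0
        return self.state.astype(np.float32), {}

    def step(self, action):
        u = np.clip(action, -self.u_max, self.u_max)
        x, y, z = self.state
        # Controlled Lorenz dynamics: each equation receives an extra control term.
        dx = self.sigma * (y - x) + u[0]
        dy = x * (self.rho - z) - y + u[1]
        dz = x * y - self.beta * z + u[2]
        # Simple explicit-Euler integration step (an assumed discretization).
        self.state = self.state + self.dt * np.array([dx, dy, dz])
        self.t += 1
        # Reward penalizes the total triaxial error relative to the target.
        reward = -float(np.sum(np.abs(self.state - self.target)))
        truncated = self.t >= self.horizon
        return self.state.astype(np.float32), reward, False, truncated, {}


# Train a PPO agent on the sketched environment (training budget is arbitrary).
model = PPO("MlpPolicy", LorenzControlEnv(), verbose=0)
model.learn(total_timesteps=200_000)
```

Varying the u_max argument in this sketch mirrors the abstract's experiment relating the maximum control force to controller performance, and adding noise to the state update would correspond to the robustness tests with random disturbances.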
