Abstract
Despite the success of neural network (NN) controllers, guaranteeing the stability of closed-loop NN control systems remains a major challenge. This paper proposes a constrained reinforcement learning (RL) algorithm to design optimal NN controllers for a class of nonlinear dynamical systems while guaranteeing closed-loop stability. By representing the controlled system as a tensor product (TP) model, a sufficient stability condition for the closed-loop NN control system is derived, which is then imposed as a training constraint on the NN controller to ensure stability at each training step. Stability is theoretically confirmed by showing the existence of a Lyapunov function, which is obtained directly during the training process. Simulation studies on various nonlinear dynamical systems show that the NN controllers trained with the proposed constrained RL algorithm outperform linear quadratic regulator (LQR) controllers and yield roughly the same performance index values as controllers trained with an unconstrained RL algorithm, indicating that the imposed constraint does not significantly degrade training performance. Most importantly, while simulation results show that all controllers achieve closed-loop stability, only the NN controllers trained with the proposed constrained RL algorithm carry a theoretical stability guarantee.
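To make the idea of constraining every training step concrete, the following is a minimal, self-contained sketch of an accept/reject training loop: a candidate update to an NN controller is kept only if it improves a rollout cost and still passes a stability check. It is not the paper's algorithm; the pendulum-like dynamics, the quadratic Lyapunov candidate V(x) = xᵀPx, the sampled decrease test (standing in for the paper's TP-model-based sufficient condition), and the random-search update (standing in for the actual RL algorithm) are all illustrative placeholders.

```python
# Hedged sketch of a stability-constrained training step (not the paper's method).
import numpy as np

rng = np.random.default_rng(0)

def f(x, u):
    # Placeholder nonlinear plant: pendulum-like discrete-time dynamics.
    theta, omega = x
    dt = 0.02
    return np.array([theta + dt * omega,
                     omega + dt * (np.sin(theta) + u)])

def controller(x, w):
    # Tiny one-hidden-layer NN controller with parameters w = (W1, b1, W2).
    W1, b1, W2 = w
    return float(W2 @ np.tanh(W1 @ x + b1))

def lyapunov_decreases(w, P, n_samples=200):
    # Sampled check of V(f(x, pi(x))) < V(x) with V(x) = x' P x near the origin.
    # (The paper instead derives a sufficient condition from a TP model.)
    for _ in range(n_samples):
        x = rng.uniform(-0.5, 0.5, size=2)
        x_next = f(x, controller(x, w))
        if x_next @ P @ x_next >= x @ P @ x:
            return False
    return True

def rollout_cost(w, horizon=100):
    # Quadratic regulation cost of one closed-loop rollout (placeholder objective).
    x, cost = np.array([0.3, 0.0]), 0.0
    for _ in range(horizon):
        u = controller(x, w)
        cost += x @ x + 0.1 * u ** 2
        x = f(x, u)
    return cost

def perturb(w, scale=0.05):
    # Random parameter perturbation standing in for an RL policy update.
    return tuple(p + scale * rng.standard_normal(p.shape) for p in w)

# Constrained training loop: accept a candidate update only if it both lowers
# the cost and still satisfies the Lyapunov-decrease (stability) constraint.
P = np.eye(2)
w = (0.5 * rng.standard_normal((8, 2)), np.zeros(8), 0.5 * rng.standard_normal((1, 8)))
best = rollout_cost(w)
for step in range(50):
    w_new = perturb(w)
    if rollout_cost(w_new) < best and lyapunov_decreases(w_new, P):
        w, best = w_new, rollout_cost(w_new)
print("final cost:", best)
```

In the paper, the constraint is a sufficient stability condition derived from the TP representation rather than a sampled test, and the Lyapunov function certifying stability is produced as part of training; the sketch only illustrates how such a condition can gate every update.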