Abstract

This article proposes an off-policy risk-sensitive reinforcement learning (RL) control framework that jointly optimizes task performance and constraint satisfaction in a disturbed environment. A risk-aware value function, constructed from a pseudo control and risk-sensitive input and state penalty terms, is introduced to convert the original constrained robust stabilization problem into an equivalent unconstrained optimal control problem. An off-policy RL algorithm is then developed to learn an approximate solution to this risk-aware value function. Throughout the learning process, the associated approximate optimal control policy satisfies both the input and state constraints under disturbances. By replaying recorded experience data through the off-policy weight update law of the critic neural network, weight convergence is guaranteed. Moreover, online and offline algorithms are developed as principled ways to record informative experience data that provide the sufficient excitation required for weight convergence. Proofs of system stability and weight convergence are given, and simulation results demonstrate the validity of the proposed control framework.
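
The abstract outlines the core mechanism: a critic neural network approximates the risk-aware value function, and recorded experience data are replayed through its off-policy weight update so that stored samples supply the excitation needed for convergence. As a rough illustration only, the Python sketch below implements a generic experience-replay critic update in this spirit; the features phi, the running cost, the buffer contents, and the step size alpha are all illustrative assumptions, not the paper's actual equations.

```python
import numpy as np

# Minimal sketch (assumed, not from the paper): the critic approximates the
# value function as V(x) ~= w^T phi(x). The weights descend the continuous-time
# Bellman residual evaluated on the current sample *and* on a buffer of
# recorded samples, so replayed data provide the excitation needed for
# weight convergence instead of a persistent-excitation condition.

def phi(x):
    # Quadratic critic features for a 2-state example (illustrative choice).
    x1, x2 = x
    return np.array([x1 * x1, x1 * x2, x2 * x2])

def grad_phi(x):
    # Jacobian of phi with respect to x, shape (3, 2).
    x1, x2 = x
    return np.array([[2 * x1, 0.0],
                     [x2,     x1],
                     [0.0,    2 * x2]])

def critic_update(w, sample, buffer, alpha=0.05):
    """One normalized-gradient step on the Bellman residual.

    sample and each buffer entry are (x, xdot, cost) triples, where cost is
    the (possibly risk-sensitive) running penalty evaluated at that point.
    """
    for x, xdot, cost in [sample] + buffer:
        sigma = grad_phi(x) @ xdot            # regressor: d/dt of phi(x)
        e = w @ sigma + cost                  # continuous-time Bellman residual
        w = w - alpha * sigma * e / (1.0 + sigma @ sigma) ** 2
    return w

# Usage: replay a small buffer of recorded transitions at every update step.
rng = np.random.default_rng(0)
w = np.zeros(3)
buffer = [(rng.standard_normal(2), rng.standard_normal(2), 1.0)
          for _ in range(10)]
for _ in range(100):
    x = rng.standard_normal(2)
    w = critic_update(w, (x, -x, x @ x), buffer)   # toy dynamics xdot = -x
print("critic weights:", w)
```

Because every stored triple contributes a regressor direction to each update, the buffer plays the role the abstract assigns to recorded experience data: if the stored regressors span the feature space, the combined update is sufficiently excited and the weights converge.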
