Abstract

This article presents methods for identifying uncertain plants used in stability-certified reinforcement learning (RL). The uncertain plant is the interconnection of a perturbation and a nominal plant represented by a state-space model. Under the assumption that the perturbation is bounded, two methods are proposed to identify the uncertain plant. The first is an optimization-based approach for known nonlinear systems: the state-space model is obtained by solving an optimization problem that seeks matrices satisfying the boundedness condition. The second is a learning-based approach for unknown systems: the state-space model is estimated through interaction with the environment, which makes this method well suited for use in conjunction with stability-certified RL. In the numerical experiments, the identified uncertain models are used for stability analysis of feedback systems with a neural network controller. The results show that both feedback systems achieve sufficient stability with similar regions of attraction (ROAs). This indicates that the learning-based method enables stability analysis of neural network controllers even for unknown systems.
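As a rough illustration of the identification idea described above, the following sketch fits a nominal linear state-space model to trajectory data of a nonlinear system and empirically bounds the residual, which plays the role of the bounded perturbation. All dynamics, noise levels, and variable names here are hypothetical and chosen for illustration; the paper's actual optimization- and learning-based formulations are not reproduced.

```python
import numpy as np

# Hypothetical example: the "true" plant is linear dynamics plus a small,
# bounded nonlinearity (the perturbation). We identify the nominal linear
# model x[k+1] = A x[k] + B u[k] by least squares and bound the residual.
rng = np.random.default_rng(0)

A_true = np.array([[0.9, 0.1],
                   [0.0, 0.8]])
B_true = np.array([[0.0],
                   [1.0]])

n_steps = 200
x = np.zeros(2)
X, U, X_next = [], [], []
for _ in range(n_steps):
    u = rng.uniform(-1.0, 1.0, size=1)
    # 0.01 * sin(x) acts as a bounded perturbation on the nominal dynamics.
    x_next = A_true @ x + B_true @ u + 0.01 * np.sin(x)
    X.append(x); U.append(u); X_next.append(x_next)
    x = x_next

X, U, X_next = map(np.array, (X, U, X_next))

# Stack regressors [x; u] and solve X_next ≈ [A B] [x; u] in least squares.
Phi = np.hstack([X, U])
Theta, *_ = np.linalg.lstsq(Phi, X_next, rcond=None)
A_hat = Theta.T[:, :2]
B_hat = Theta.T[:, 2:]

# Empirical bound on the perturbation: the largest residual norm observed.
residuals = X_next - Phi @ Theta
gamma = float(np.max(np.linalg.norm(residuals, axis=1)))
```

The estimated `(A_hat, B_hat)` gives the nominal state-space model, and `gamma` is an empirical (not certified) bound on the perturbation; the paper instead enforces the boundedness condition within the identification itself.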
