Abstract

This paper presents a safe model-based reinforcement learning (MBRL) approach for controlling nonlinear systems described by linear parameter-varying (LPV) models. A variational Bayesian inference neural network (BNN) approach is first employed to learn a state-space model with uncertainty quantification from input-output data collected from the system; the learned model is then utilised in MBRL to learn control actions with safety guarantees. Specifically, MBRL employs the BNN model to generate simulation environments for training, which avoids safety violations during the exploration stage. To adapt to dynamically varying environments, knowledge of the evolution of the LPV model's scheduling variables is incorporated into the simulation to reduce the discrepancy between the transition distributions of the simulated and real environments. Experiments on a parameter-varying double integrator system and a control moment gyroscope (CMG) simulation model demonstrate that the proposed approach safely achieves the desired control performance.
