Abstract
This paper proposes the combination of two model-free controller tuning techniques, namely linear Virtual Reference Feedback Tuning (VRFT) and nonlinear state-feedback Q-learning, referred to as a new mixed VRFT-Q learning approach. VRFT is first applied to find a stabilizing feedback controller in a model reference tracking setting, using only input-output (IO) experimental data from the process. Reinforcement Q-learning is next applied in the same setting, using input-state experimental data collected in closed loop with a perturbed stabilizing VRFT feedback controller; this ensures good exploration of the state-action space and avoids collecting data under non-stabilizing control. The Q-learning controller is then learned from the input-state data in a batch neural fitted framework. The mixed VRFT-Q learning approach is validated on a case study dealing with the position control of an open-loop stable multi input-multi output (MIMO) aerodynamic system with two degrees of motion. Experimental results show that the Q-learning controllers improve control performance over the initial VRFT controllers.
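To make the two-stage idea concrete at toy scale, the sketch below first tunes a PI-like controller by linear VRFT from a single open-loop IO dataset, then runs a few batch neural fitted Q-iterations on transitions collected with that controller perturbed by exploration noise. The first-order plant, the reference model, the PI controller structure, the quadratic cost, the discretized action set, and all numerical values are illustrative assumptions and are not taken from the paper; the paper's actual controllers target a MIMO aerodynamic system with state feedback.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

np.random.seed(0)

# --- stage 1 (sketch): linear VRFT from one open-loop IO dataset ---
N = 200
u = np.random.uniform(-1, 1, N)           # plant input (excitation signal)
y = np.zeros(N)                           # plant output
for k in range(1, N):
    y[k] = 0.9 * y[k - 1] + 0.1 * u[k - 1]   # toy plant, used only to generate data

# virtual reference r_v such that the reference model maps r_v to the measured y
a_m, b_m = 0.8, 0.2                        # assumed reference model: y_d[k+1] = a_m*y_d[k] + b_m*r[k]
r_v = np.zeros(N)
r_v[:-1] = (y[1:] - a_m * y[:-1]) / b_m    # invert the reference model sample-wise
e_v = r_v - y                              # virtual tracking error

# fit a PI-like controller u[k] = Kp*e[k] + Ki*sum(e) by least squares on (e_v, u)
E = np.column_stack([e_v, np.cumsum(e_v)])
theta, *_ = np.linalg.lstsq(E[:-1], u[:-1], rcond=None)
Kp, Ki = theta
print("VRFT gains (Kp, Ki):", Kp, Ki)

# --- stage 2 (sketch): batch neural fitted Q-learning on closed-loop data ---
gamma = 0.9
actions = np.linspace(-1.0, 1.0, 7)        # discretized action set for the greedy minimization

# collect transitions (s, u, cost, s') with the VRFT controller plus exploration noise
transitions, y_k, e_int, ref = [], 0.0, 0.0, 0.5
for k in range(500):
    e = ref - y_k
    e_int += e
    u_k = np.clip(Kp * e + Ki * e_int + 0.3 * np.random.randn(), -1, 1)  # perturbed VRFT action
    y_next = 0.9 * y_k + 0.1 * u_k
    cost = (ref - y_next) ** 2
    transitions.append(((y_k, ref), u_k, cost, (y_next, ref)))
    y_k = y_next

S = np.array([[*s, ui] for s, ui, _, _ in transitions])   # features: state + action
c = np.array([ci for _, _, ci, _ in transitions])         # one-step costs
S2 = np.array([s2 for _, _, _, s2 in transitions])         # successor states

q = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500)
q.fit(S, c)                                 # initialize Q with the one-step cost
for _ in range(20):                         # fitted Q-iteration sweeps over the fixed batch
    q_next = np.min(
        [q.predict(np.column_stack([S2, np.full(len(S2), a)])) for a in actions], axis=0)
    q.fit(S, c + gamma * q_next)            # regression targets: Bellman backups (cost-to-go)

# greedy Q-learning controller: u(s) = argmin over the action set of Q(s, a)
```

The batch is kept fixed and the network is refit from scratch at every iteration, which mirrors the neural fitted Q-iteration style of batch learning; in the paper's setting the state, action, and cost definitions come from the model reference tracking problem rather than from this toy example.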