Abstract

This paper compares two model-free learning techniques for feedback controller tuning: a linear Virtual Reference Feedback Tuning (VRFT) technique, which tunes the controller from input-output data, and Reinforcement Q-learning, which tunes two nonlinear state feedback controllers from input-state experimental data (ED). The state feedback controllers are tuned in a model reference setting that aims to linearize the control system (CS) over a wide operating range. Both learning techniques are validated on a position control case study for an open-loop stable aerodynamic system, and the tuning techniques are compared in terms of structural complexity, CS performance, and the amount of ED needed for learning.
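
As background for the comparison, the following is a minimal sketch of the one-shot VRFT step for a linearly parameterized controller C(z, theta) = sum_i theta_i * C_i(z): the measured output is filtered through the inverse reference model to obtain a virtual reference, and the controller parameters are fitted by least squares so that the controller reproduces the measured input from the virtual tracking error. The function name, arguments, and use of SciPy filtering are illustrative assumptions, and the prefilter L(z) normally used to shape the VRFT criterion is omitted.

import numpy as np
from scipy.signal import lfilter

def vrft_fit(u, y, M_num, M_den, C_num_list, C_den_list):
    """One-shot VRFT fit of a linearly parameterized controller
    C(z, theta) = sum_i theta_i * C_i(z) from one batch of I/O data (u, y)."""
    # Virtual reference r_bar such that M(z) * r_bar = y, obtained by
    # filtering y through the inverse reference model
    # (M(z) assumed minimum phase and invertible).
    r_bar = lfilter(M_den, M_num, y)
    # Virtual tracking error the controller would have seen in closed loop.
    e = r_bar - y
    # Regressor matrix: column i is the response of basis filter C_i(z) to e.
    Phi = np.column_stack(
        [lfilter(b, a, e) for b, a in zip(C_num_list, C_den_list)]
    )
    # Least squares: the tuned controller should reproduce the measured input u.
    theta, *_ = np.linalg.lstsq(Phi, u, rcond=None)
    return theta

With noisy data this fit is usually combined with a prefilter and instrumental variables; the Q-learning alternative considered in the paper instead updates a nonlinear state feedback law iteratively from input-state data rather than in a single batch.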
