Abstract

Iterative Feedback Tuning (IFT) is a direct, data-driven control technique that relies on a reference model to capture the desired behavior of the unknown system. The choice of this hyper-parameter is particularly critical, as a poor choice can compromise performance and even closed-loop stability. This paper explores the suitability of three search methods (grid search, random search, and successive halving) for automatically tuning the reference model from data, based on a set of user-defined soft specifications on the desired closed-loop behavior. To compare the three methods and demonstrate their effectiveness, we consider a benchmark simulation case study on the control of a mass-spring-damper system. Our results show that successive halving is the most efficient way to run IFT with automatic reference model selection under a finite budget of time for data collection.
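
The selection scheme the abstract highlights, successive halving over candidate reference models on a finite experiment budget, is simple to outline. Below is a minimal Python sketch of that idea; the candidate parameterization (natural frequency and damping of a second-order target), the budget split across rounds, and the run_ift_and_score stand-in are illustrative assumptions and are not taken from the paper.

```python
# Hedged sketch (not the authors' implementation): successive halving over a
# grid of candidate reference models for IFT-based tuning. `run_ift_and_score`
# is a toy stand-in: it does not run real IFT experiments, it only mimics a
# noisy cost whose estimate improves with the experiment budget, so the
# selection logic can be exercised end to end.
import math
import random

# Candidate reference models, here parameterized by (natural frequency wn,
# damping ratio zeta) of a second-order target response -- a common but
# hypothetical choice, not taken from the paper.
CANDIDATES = [(wn, zeta) for wn in (2.0, 4.0, 8.0) for zeta in (0.5, 0.7, 1.0)]


def run_ift_and_score(reference_model, budget):
    """Toy placeholder: return a noisy cost for `reference_model`.

    In a real setup this would run `budget` IFT experiments on the plant
    and evaluate the user-defined soft specifications on the closed loop.
    """
    wn, zeta = reference_model
    true_cost = (wn - 4.0) ** 2 + 10.0 * (zeta - 0.7) ** 2  # fictitious optimum
    noise = random.gauss(0.0, 1.0) / math.sqrt(budget)      # more budget, less noise
    return true_cost + noise


def successive_halving(candidates, total_budget):
    """Pick a reference model by successive halving.

    Each round spends roughly the same share of `total_budget`, splits it
    among the surviving candidates, keeps the best-scoring half, and repeats
    until a single candidate remains.
    """
    remaining = list(candidates)
    n_rounds = max(1, math.ceil(math.log2(len(remaining))))
    budget_per_round = max(1, total_budget // n_rounds)

    while len(remaining) > 1:
        per_candidate = max(1, budget_per_round // len(remaining))
        scored = sorted(remaining, key=lambda m: run_ift_and_score(m, per_candidate))
        remaining = scored[: max(1, len(remaining) // 2)]   # keep the best half
    return remaining[0]


if __name__ == "__main__":
    best = successive_halving(CANDIDATES, total_budget=240)
    print("selected reference model (wn, zeta):", best)
```

Under this scheme, poorly matched reference models are discarded after only a few cheap experiments, while the surviving candidates receive progressively larger budgets, which is what makes successive halving attractive when data-collection time is limited.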
