Abstract

An algorithm for iterative learning control is developed on the basis of an optimization principle which has been used previously to derive gradient-type algorithms. The new algorithm has several benefits, including a realization in terms of Riccati feedback and feedforward components. This realization also has the advantage of implicitly ensuring automatic step-size selection, and hence guarantees convergence without the need for empirical choice of parameters. The algorithm is expressed as a very general norm optimization problem in a Hilbert space setting and hence, in principle, can be used for both continuous- and discrete-time systems. A basic relationship with almost singular optimal control is outlined. The theoretical results are illustrated by simulation studies which highlight the dependence of the speed of convergence on parameters chosen to represent the norms of the signals appearing in the optimization problem.
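The norm-optimization principle described above can be illustrated with a minimal sketch for a lifted discrete-time SISO system. This is not the paper's Riccati realization; it is the equivalent batch (matrix) form of a norm-optimal ILC trial update, minimizing a weighted sum of the tracking-error norm and the norm of the change in input. The plant, trial length, and weights `q`, `r_w` are illustrative assumptions.

```python
import numpy as np

# Assumed first-order plant: x[t+1] = a*x[t] + b*u[t], y[t] = x[t], x[0] = 0.
a, b = 0.9, 0.5
N = 50  # samples per trial

# Lifted input-to-output map G (y[1..N] = G @ u[0..N-1]):
# y[i+1] depends on u[j] for j <= i via the impulse response b*a^(i-j).
G = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):
        G[i, j] = b * a ** (i - j)

ref = np.sin(2 * np.pi * np.arange(1, N + 1) / N)  # reference trajectory

# Weights on the error norm and the input-change norm (assumed values).
q, r_w = 1.0, 0.01

# Each trial solves  min_u ||ref - G u||^2_q + ||u - u_prev||^2_r,
# whose closed form is  u = (q G'G + r I)^{-1} (q G' ref + r u_prev).
# This built-in "step size" is what guarantees monotone error decay.
u = np.zeros(N)
errs = []
for k in range(10):
    e = ref - G @ u
    errs.append(np.linalg.norm(e))
    u = np.linalg.solve(q * G.T @ G + r_w * np.eye(N),
                        q * G.T @ ref + r_w * u)
```

Running this, the sequence `errs` is monotonically non-increasing trial over trial, reflecting the automatic step-size property; a smaller input weight `r_w` (relative to `q`) speeds convergence, matching the abstract's remark on the dependence of convergence speed on the norm weights.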
