Abstract

Iterative learning control (ILC) is a high-performance control design method for systems that operate repetitively; it improves performance by learning from past trials. Our recent work shows that reinforcement learning (RL) shares many features with ILC and thus opens the door to new ILC algorithm designs. This paper continues that line of research by considering a parameter optimal iterative learning control (POILC) algorithm, which has a very simple structure and appealing convergence properties but requires a model of the system. We first develop a data-driven POILC algorithm that uses no model information, obtained by performing an extra experiment on the plant. We then use a policy gradient RL algorithm to design a new model-free POILC algorithm. Both algorithms achieve the high-performance control target without using model information, but their convergence properties differ. In particular, by increasing the number of function approximators in the latter, the RL-based model-free ILC can approach the performance of the model-based POILC. A numerical study compares the performance of the different approaches and demonstrates the effectiveness of the proposed designs.
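To make the abstract's terminology concrete, the sketch below illustrates the classical parameter-optimal ILC update the paper builds on: over a lifted trial representation e_{k+1} = e_k − β_{k+1} G e_k, the trial-varying gain β_{k+1} minimizes ||e_{k+1}||² + w β_{k+1}². This is a minimal Python sketch under standard POILC assumptions; the first-order plant, the weight w, and the helper names (`lifted_plant`, `poilc_gain_data_driven`) are illustrative choices, not the paper's exact algorithms. The data-driven variant shows the idea of replacing the model product G e_k with one extra experiment on the plant, as mentioned in the abstract; the paper's policy-gradient RL design is not reproduced here.

```python
import numpy as np

def lifted_plant(N, a=0.9, b=0.5, c=1.0):
    """Lower-triangular lifted (impulse-response) matrix of a stable
    first-order discrete-time plant -- a stand-in for the true system."""
    G = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            G[i, j] = c * a ** (i - j) * b
    return G

def poilc_gain_model_based(e, G, w):
    """Gain minimizing ||e_{k+1}||^2 + w*beta^2 with
    e_{k+1} = e_k - beta * G e_k (model-based POILC)."""
    Ge = G @ e
    return float(e @ Ge) / (w + float(Ge @ Ge))

def poilc_gain_data_driven(e, apply_plant, w):
    """Model-free variant: obtain G e_k from one extra experiment --
    feed the current error signal into the plant and record the output."""
    Ge = apply_plant(e)          # extra experiment replaces the model
    return float(e @ Ge) / (w + float(Ge @ Ge))

if __name__ == "__main__":
    N, w = 50, 1e-2
    G = lifted_plant(N)                        # "true" plant, hidden from the learner
    r = np.sin(np.linspace(0, 2 * np.pi, N))   # reference over one trial
    u = np.zeros(N)
    for k in range(20):                        # ILC trials
        e = r - G @ u                          # tracking error at trial k
        beta = poilc_gain_data_driven(e, lambda v: G @ v, w)
        u = u + beta * e                       # POILC input update
        print(f"trial {k:2d}  ||e|| = {np.linalg.norm(e):.4f}")
```

With this quadratic cost the gain has the closed form β_{k+1} = e_kᵀG e_k / (w + ||G e_k||²), which the two gain functions compute from the model and from measured data respectively; the error norm printed per trial decreases monotonically for this plant.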
