Abstract

This paper investigates an iterative learning controller for linear discrete-time systems with state delay, based on two-dimensional (2-D) system theory. It is shown that a 2-D linear discrete Roesser model can describe the ILC process of linear discrete time-delay systems. Less restrictive conditions for the convergence of the proposed learning rules are derived, and a learning algorithm is presented that learns the control input more effectively, yielding a control input that drives the system output to the desired trajectory quickly. Numerical examples are included to illustrate the performance of the proposed control procedures.
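To make the setting concrete, the following is a minimal sketch of a P-type ILC iteration for a linear discrete-time system with state delay, x(k+1) = A x(k) + A_d x(k-d) + B u(k), y(k) = C x(k). The system matrices, delay length, learning gain, and desired trajectory below are illustrative assumptions, not the paper's example or its specific 2-D Roesser-model learning rule.

```python
import numpy as np

# Illustrative plant (assumed, not from the paper):
# x(k+1) = A x(k) + Ad x(k-d) + B u(k),  y(k) = C x(k)
A = np.array([[0.5, 0.1],
              [0.0, 0.6]])
Ad = np.array([[0.10, 0.00],
               [0.05, 0.10]])   # state-delay matrix
B = np.array([[0.0],
              [1.0]])
C = np.array([[0.0, 1.0]])      # chosen so that C B is invertible (scalar 1)
d = 2                           # state delay in samples
N = 20                          # trial horizon

def run_trial(u):
    """Simulate one trial from zero initial/history state; return output y."""
    x = [np.zeros(2) for _ in range(N + 1)]
    y = np.zeros(N + 1)
    for k in range(N):
        x_del = x[k - d] if k >= d else np.zeros(2)  # zero pre-history
        x[k + 1] = A @ x[k] + Ad @ x_del + B.ravel() * u[k]
        y[k + 1] = float(C @ x[k + 1])
    return y

yd = np.sin(np.linspace(0.0, np.pi, N + 1))  # desired trajectory (assumed)
u = np.zeros(N)

# P-type learning rule: u_{i+1}(k) = u_i(k) + gamma * e_i(k+1).
# With gamma = 0.8 and C B = 1, |1 - gamma * C B| = 0.2 < 1, so the
# tracking error contracts from trial to trial.
gamma = 0.8
for trial in range(30):
    y = run_trial(u)
    e = yd - y
    u = u + gamma * e[1:]

print("max |e| after 30 trials:", np.max(np.abs(e)))
```

Viewing the trial index and the time index as the two independent directions is exactly what the 2-D (Roesser-model) formulation exploits: the state propagates along time within a trial, while the input update propagates across trials.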
