Abstract

The subject of this article is the modelling of the influence of non-minimum phase discrete-time system dynamics on the performance of norm optimal iterative learning control (NOILC) algorithms, with the intent of explaining the observed phenomenon and predicting its primary characteristics. It is established that performance in the presence of one or more non-minimum phase plant zeros typically has two phases. These consist of an initial fast monotonic reduction of the L2 error norm (mean square error) followed by a very slow asymptotic convergence. Although the norm of the tracking error does eventually converge to zero, the practical implication over a finite number of trials is apparent convergence to a non-zero error. The source of this slow convergence is identified using the singular value distribution of the system's all-pass component. A predictive model of the onset of slow convergence behaviour is developed as a set of linear constraints and shown to be valid when the iteration time interval is sufficiently long. The results provide a good prediction of the magnitude of the error norm at which slow convergence begins. Formulae for this norm and the associated error time series are obtained for single-input single-output systems with several non-minimum phase zeros outside the unit circle using Lagrangian techniques. Numerical simulations are given to confirm the validity of the analysis.
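
The two-phase behaviour described above can be reproduced with a short numerical sketch. The following Python script is illustrative only and is not taken from the article: the plant (a stable SISO system with a non-minimum phase zero at z = 1.5), the trial length N, and the input weighting w are assumptions chosen for demonstration. It lifts the plant into its trial-long Toeplitz matrix G, applies the standard NOILC error recursion e_{k+1} = (I + (1/w) G G^T)^{-1} e_k (with Q = I, R = wI), and prints the error norm per iteration together with the smallest singular values of G, which are associated with the all-pass component and set the level at which convergence stalls.

import numpy as np

# Illustrative non-minimum phase plant (assumed example, not from the article):
#   G(z) = (z - 1.5) / (z^2 - 1.3 z + 0.4)   (zero at 1.5, poles at 0.8 and 0.5)
N = 100      # trial length (samples per iteration)
w = 0.01     # NOILC input weighting: Q = I, R = w*I

# Impulse response (Markov parameters) from the difference equation
#   y[n] = 1.3 y[n-1] - 0.4 y[n-2] + u[n-1] - 1.5 u[n-2]
u = np.zeros(N + 1)
u[0] = 1.0
y = np.zeros(N + 1)
for n in range(1, N + 1):
    y[n] = (1.3 * y[n - 1]
            - 0.4 * (y[n - 2] if n >= 2 else 0.0)
            + u[n - 1]
            - 1.5 * (u[n - 2] if n >= 2 else 0.0))
h = y  # h[1], h[2], ..., h[N]

# Lifted (lower-triangular Toeplitz) plant matrix mapping u[0..N-1] to y[1..N]
G = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):
        G[i, j] = h[i - j + 1]

# NOILC error recursion: e_{k+1} = (I + (1/w) G G^T)^{-1} e_k
t = np.arange(1, N + 1) / N
r = np.sin(2 * np.pi * t)   # reference trajectory over the trial
e = r.copy()                # zero initial input, so e_0 = r
L = np.linalg.inv(np.eye(N) + (1.0 / w) * (G @ G.T))
for k in range(1, 201):
    e = L @ e
    if k == 1 or k % 20 == 0:
        print(f"iteration {k:4d}   ||e||_2 = {np.linalg.norm(e):.6e}")

# The near-zero singular values of G (linked to the non-minimum phase zero)
# correspond to error directions that decay extremely slowly.
sv = np.linalg.svd(G, compute_uv=False)
print("smallest singular values of lifted plant:", sv[-5:])

On a typical run the error norm falls rapidly over the first few iterations and then remains nearly constant for the rest of the simulation, matching the fast-then-slow convergence discussed in the abstract; lengthening the trial or moving the zero further outside the unit circle makes the plateau more pronounced.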
