Abstract
The subject of this article is the modelling of the influence of non-minimum phase discrete-time system dynamics on the performance of norm optimal iterative learning control (NOILC) algorithms, with the intent of explaining the observed behaviour and predicting its primary characteristics. It is established that performance in the presence of one or more non-minimum phase plant zeros typically has two phases. These consist of an initial fast monotonic reduction of the L2 error norm (mean square error) followed by a very slow asymptotic convergence. Although the norm of the tracking error does eventually converge to zero, the practical implication over a finite number of trials is apparent convergence to a non-zero error. The source of this slow convergence is identified using the singular value distribution of the system's all-pass component. A predictive model of the onset of slow convergence is developed as a set of linear constraints and shown to be valid when the iteration time interval is sufficiently long. The results provide a good prediction of the error norm magnitude at which slow convergence begins. Formulae for this norm and the associated error time series are obtained for single-input single-output systems with several non-minimum phase zeros outside the unit circle using Lagrangian techniques. Numerical simulations are given to confirm the validity of the analysis.
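To illustrate the two-phase behaviour the abstract describes, the following minimal sketch simulates NOILC on an assumed second-order discrete-time plant with a single non-minimum phase zero at z = 1.5; the plant, trial length N, reference r, and weight eps2 are illustrative choices and are not taken from the paper. The update u ← u + (GᵀG + ε²I)⁻¹Gᵀe is the standard norm optimal solution for the lifted (Toeplitz) plant description y = Gu.

```python
import numpy as np

# Assumed example plant (not the paper's): a stable discrete-time SISO
# system with poles at 0.5, 0.2 and a non-minimum phase zero at z = 1.5:
#   y(t) = 0.7*y(t-1) - 0.1*y(t-2) + u(t-1) - 1.5*u(t-2)
N = 100  # samples per trial (iteration time interval)

# Markov parameters h(1..N) from the impulse response of the difference equation
h = np.zeros(N + 1)
u_imp = np.zeros(N + 1)
u_imp[0] = 1.0
for t in range(1, N + 1):
    h[t] = (0.7 * h[t - 1] - (0.1 * h[t - 2] if t >= 2 else 0.0)
            + u_imp[t - 1] - (1.5 * u_imp[t - 2] if t >= 2 else 0.0))

# Lifted plant matrix over one trial: y = G @ u, G lower-triangular Toeplitz
G = np.zeros((N, N))
for i in range(N):
    G[i, : i + 1] = h[i + 1 : 0 : -1]

# A cluster of near-zero singular values, inherited from the all-pass
# factor associated with the zero at z = 1.5, drives the slow second phase
s = np.linalg.svd(G, compute_uv=False)
print("smallest singular values of G:", s[-3:])

r = np.sin(np.linspace(0.0, 2.0 * np.pi, N))      # reference to be tracked
eps2 = 0.01                                       # weight on input change
M = np.linalg.inv(G.T @ G + eps2 * np.eye(N)) @ G.T  # NOILC update gain

u = np.zeros(N)
for k in range(200):
    e = r - G @ u          # tracking error on trial k
    u = u + M @ e          # norm optimal input update
    if k % 20 == 0:
        print(f"trial {k:3d}  ||e|| = {np.linalg.norm(e):.6f}")
```

Running the loop shows the error norm dropping rapidly over the first few trials and then stalling at a near-constant plateau, i.e. apparent convergence to a non-zero error, consistent with the two phases identified above.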