Abstract

Piecewise affine systems constitute a popular framework for the approximation of non-linear systems and the modelling of hybrid systems. This paper addresses the recursive subsystem estimation in continuous-time piecewise affine systems. Parameter identifiers are extended from continuous-time state-space models to piecewise linear and piecewise affine systems. The convergence rate of the presented identifiers is improved further using concurrent learning, which makes concurrent use of current and recorded measurements. In concurrent learning, assumptions on persistence of excitation are replaced by the less restrictive linear independence of the recorded data. The introduction of memory, however, reduces the tracking ability of concurrent learning because errors in the recorded measurements prevent convergence to the true parameters. In order to overcome this limitation, an algorithm is proposed to detect and remove erroneous measurements at run-time and thereby restore the tracking ability. Detailed examples are included to validate the proposed methods numerically.

Highlights

  • Piecewise affine (PWA) systems constitute a powerful tool to describe complex dynamical systems

  • This paper extends linear parameter identifiers to switched systems in the form of piecewise linear (PWL) and PWA systems

  • In the case of PWA systems, it is shown that the affine input violates classical assumptions on the richness of input signals

Summary

Introduction

Piecewise affine (PWA) systems constitute a powerful tool to describe complex dynamical systems. If the input signals in u are sufficiently rich of order n + 1 with distinct frequencies, and such that they cause repeated activation of all subsystems obeying a certain dwell time T0, the update laws for Âi and B̂i in Equation (7) cause the estimates Âi, B̂i to converge to the true system matrices Ai, Bi and cause the prediction errors ei = x̂i − x to converge to zero. The update laws (7) suffer from three shortcomings: (1) all signals in u must be sufficiently rich of order n + 1 for all time in order to ensure persistence of excitation; (2) adaptation of subsystem i only takes place while subsystem i is active; (3) convergence is rather slow. These shortcomings can be overcome by introducing memory in the form of concurrent learning, as shown by Kersting and Buss (2014b) and as revised here. This is exploited in order to further improve the performance of concurrent-learning adaptive identification.
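To make the concurrent-learning idea concrete, the following is a minimal Python sketch of a gradient-type identifier for one subsystem that combines the current prediction error with errors replayed from a stack of recorded measurements. The update structure, the scalar gain gamma, and the naive stack handling are illustrative assumptions for this sketch; they are not the paper's Equation (7) or its history-stack selection scheme.

```python
import numpy as np

# Hedged sketch of a concurrent-learning identifier for one subsystem
#   x_dot = A_i x + B_i u    (the affine term of a PWA subsystem is omitted)
# assuming state-derivative measurements are available when recording.

class ConcurrentLearningIdentifier:
    def __init__(self, n, m, gamma=1.0, stack_size=20):
        self.A_hat = np.zeros((n, n))   # estimate of A_i
        self.B_hat = np.zeros((n, m))   # estimate of B_i
        self.gamma = gamma              # adaptation gain (assumed scalar)
        self.stack = []                 # recorded (x, u, x_dot) triples
        self.stack_size = stack_size

    def record(self, x, u, x_dot):
        """Store a measurement while the stack has room (no selection logic)."""
        if len(self.stack) < self.stack_size:
            self.stack.append((x, u, x_dot))

    def update(self, x, u, x_dot, dt):
        """One Euler step: current prediction error plus errors replayed
        from the recorded data (the concurrent-learning memory term)."""
        def error(xk, uk, xdk):
            return xdk - (self.A_hat @ xk + self.B_hat @ uk)

        e = error(x, u, x_dot)
        dA = np.outer(e, x)
        dB = np.outer(e, u)
        for xk, uk, xdk in self.stack:
            ek = error(xk, uk, xdk)
            dA += np.outer(ek, xk)
            dB += np.outer(ek, uk)
        self.A_hat += dt * self.gamma * dA
        self.B_hat += dt * self.gamma * dB
```

With a populated stack, the memory term keeps driving Âi, B̂i toward the recorded data even when subsystem i is inactive, which reflects how concurrent learning trades persistent excitation for linear independence of the recorded measurements; it also shows why erroneous recorded data, as discussed in the abstract, would bias the estimates until detected and removed.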

Populating the history stacks
Maximising the rate of convergence
Conclusions
