Abstract

The asymptotic convergence properties of system identification methods are well known, but comparatively little is known about the practical situation in which only a finite number of data points is available. In this paper we consider the finite sample properties of prediction error methods for system identification, focusing on ARX models and uniformly bounded criterion functions. The problem we pose is: how many data points are required to guarantee, with high probability, that the expected value of the identification criterion is close to its empirical mean value? The sample sizes are obtained using generalisations of risk minimisation theory to weakly dependent processes. We obtain uniform probabilistic bounds on the difference between the expected value of the identification criterion and its empirical value evaluated on the observed data points. The bounds are very general; in particular, no assumption is made that the true system belongs to the model class. Further analysis shows that, in order to maintain a given bound on this difference, the number of data points required grows at most polynomially in the model order, and in many cases no faster than quadratically. The results obtained here generalise previous results derived for the case where the observed data were independent and identically distributed.
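The central quantity in the abstract is the gap between the empirical value of a prediction error criterion and its expected value. The following is a minimal sketch, not taken from the paper, of what that looks like for an ARX(1,1) model fitted by least squares: the system coefficients, noise level, and function names are all illustrative assumptions.

```python
import numpy as np

def simulate_arx(n, a=0.5, b=1.0, noise=0.1, seed=0):
    # Simulate a hypothetical ARX(1,1) system:
    #   y[t] = a*y[t-1] + b*u[t-1] + e[t],  e[t] ~ N(0, noise^2)
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(n)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = a * y[t - 1] + b * u[t - 1] + noise * rng.standard_normal()
    return u, y

def fit_and_criterion(u, y):
    # Least-squares ARX(1,1) fit; return the parameter estimate and the
    # empirical quadratic prediction-error criterion (mean squared residual).
    phi = np.column_stack([y[:-1], u[:-1]])          # regressor matrix
    theta, *_ = np.linalg.lstsq(phi, y[1:], rcond=None)
    resid = y[1:] - phi @ theta
    return theta, float(np.mean(resid ** 2))
```

Evaluating the fitted predictor on an independent realisation of the same system gives a Monte Carlo proxy for the expected criterion; comparing it with the empirical criterion above, for increasing sample sizes, illustrates the kind of finite-sample gap that the paper's bounds control uniformly over the model class.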
