Abstract

Reproducing kernel Hilbert spaces (RKHSs) provide a natural framework for data modelling and have been applied to signal processing, control, machine learning and function approximation. A significant problem with models derived from an RKHS is that estimation scales poorly with the number of data, owing to the need to invert a matrix whose size equals the number of data. Among the methods proposed to overcome this are gradient-based iterative techniques, such as steepest descent and conjugate gradient, which avoid direct matrix inversion. In this study the authors explore the use of gradient methods for estimating RKHS models from data. The gradient iteration can be applied in function space and the resulting algorithm subsequently parameterised or, alternatively, applied directly to a parameterised version of the function approximation problem. The main contribution of this study is to demonstrate that the order in which the model is parameterised, before or after the gradient step is derived, affects the rate of convergence of gradient-based iterative solution algorithms. The authors also give conditions indicating which parameterisation to use in practice: criteria for selecting the better approach, functional or parametric, are given, and results demonstrating the different convergence rates are presented.
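To make the distinction concrete, consider regularised least-squares approximation in an RKHS: minimise the sum of (y_i − f(x_i))² plus λ‖f‖² over f. By the representer theorem, f = Σ_j α_j k(·, x_j), and the direct solution solves (K + λI)α = y, where K is the n × n Gram matrix. A standard derivation (assumed here; the paper's own setup may differ in detail) shows that taking the steepest-descent direction in function space first and then parameterising yields the coefficient update α ← α − η((K + λI)α − y), whereas parameterising first and differentiating with respect to α yields α ← α − ηK((K + λI)α − y). The first iteration is governed by the spectrum of K + λI, the second by that of K(K + λI), which is typically far worse conditioned. The Python sketch below illustrates the gap on synthetic data; the Gaussian kernel, data and step-size rule are invented for this illustration and are not taken from the paper.

    # Minimal sketch of the two gradient parameterisations (assumptions:
    # Gaussian kernel, synthetic 1-D data, fixed step size from the
    # spectral norm; none of these choices come from the paper).
    import numpy as np

    rng = np.random.default_rng(0)
    n, lam, steps = 200, 1e-3, 2000
    X = rng.uniform(-3.0, 3.0, n)
    y = np.sin(X) + 0.1 * rng.standard_normal(n)

    # Gram matrix K[i, j] = k(x_i, x_j) for a unit-bandwidth Gaussian kernel.
    K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2)

    # Direct solution of (K + lam*I) alpha = y: the O(n^3) inversion step
    # that the gradient iterations are meant to avoid.
    alpha_direct = np.linalg.solve(K + lam * np.eye(n), y)

    def functional_gd(K, y, lam, steps):
        # Functional gradient, parameterised afterwards:
        # iteration matrix is K + lam*I.
        A = K + lam * np.eye(len(y))
        eta = 1.0 / np.linalg.norm(A, 2)  # safe step: 1 / largest eigenvalue
        alpha = np.zeros(len(y))
        for _ in range(steps):
            alpha -= eta * (A @ alpha - y)
        return alpha

    def parametric_gd(K, y, lam, steps):
        # Parameterise first, then take gradients in alpha:
        # iteration matrix is K (K + lam*I), roughly squaring the
        # condition number relative to the functional case.
        B = K + lam * np.eye(len(y))
        eta = 1.0 / np.linalg.norm(K @ B, 2)
        alpha = np.zeros(len(y))
        for _ in range(steps):
            alpha -= eta * (K @ (B @ alpha - y))
        return alpha

    for name, fn in [("functional", functional_gd), ("parametric", parametric_gd)]:
        err = np.linalg.norm(fn(K, y, lam, steps) - alpha_direct)
        print(f"{name:10s} distance to direct solution: {err:.3e}")

Starting from α = 0 with these step sizes, every eigen-mode of the functional iteration contracts at least as fast as the corresponding mode of the parametric iteration (both matrices share K's eigenvectors), so in this sketch the functional iterate is never further from the direct solution after the same number of steps. This is the kind of convergence-rate gap, dependent on where the parameterisation is introduced, that the paper quantifies.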
