Abstract
In supervised learning, Nyström-type subsampling is a tool for reducing the computational complexity of regularized kernel methods in the big data setting. Up to now, the theoretical analysis of this approach has been carried out almost exclusively in the context of regression learning, and under the assumption that the smoothness of the target functions is restricted to Hölder-type source conditions. Such conditions do not cover target functions of very high or very low smoothness, which are also of practical interest. Moreover, under Hölder source conditions there is no need to consider regularization with sufficiently high qualification, because order-optimal learning rates are already achieved by simple Tikhonov regularization, also known as kernel ridge regression. At the same time, this learning method cannot improve its performance for any smoothness beyond the Hölder type. Therefore, in this paper our goal is to extend the previous analysis of Nyström-type subsampling to general source conditions and to regularization schemes with sufficiently high qualification. We also show that, under a rather natural assumption, our results can easily be reformulated in the ranking learning setting.
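To make the setting concrete, here is a minimal sketch (not from the paper) of Nyström-subsampled kernel ridge regression: Tikhonov regularization restricted to the span of m uniformly sampled kernel sections, which reduces the cost of solving an n x n linear system to one of size m x m. The Gaussian kernel, the uniform subsampling rule, and all parameter values below are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between rows of A and rows of B."""
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def nystroem_krr_fit(X, y, m, lam, gamma=1.0, seed=None):
    """Fit kernel ridge regression on a random Nystroem subsample of size m.

    Solves alpha = (K_nm^T K_nm + n*lam*K_mm)^{-1} K_nm^T y, so the
    estimator lives in the span of the m sampled kernel sections
    (an assumed, standard form of plain Nystroem KRR).
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    idx = rng.choice(n, size=m, replace=False)   # uniform subsampling
    Xm = X[idx]
    K_nm = gaussian_kernel(X, Xm, gamma)         # n x m cross-kernel matrix
    K_mm = gaussian_kernel(Xm, Xm, gamma)        # m x m landmark kernel matrix
    # Regularized normal equations; a small jitter keeps the system well posed.
    A = K_nm.T @ K_nm + n * lam * K_mm + 1e-10 * np.eye(m)
    alpha = np.linalg.solve(A, K_nm.T @ y)
    return Xm, alpha

def nystroem_krr_predict(X_new, Xm, alpha, gamma=1.0):
    """Evaluate the fitted estimator at new points."""
    return gaussian_kernel(X_new, Xm, gamma) @ alpha

# Usage: recover a smooth target from noisy samples with m << n.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(2000, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(2000)
Xm, alpha = nystroem_krr_fit(X, y, m=100, lam=1e-4, gamma=10.0, seed=0)
print(nystroem_krr_predict(np.array([[0.25]]), Xm, alpha, gamma=10.0))  # close to sin(pi/2) = 1
```

Schemes with higher qualification, such as iterated Tikhonov or spectral cut-off, would replace the single regularized solve above with a different filter applied to the same subsampled matrices.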