Abstract

The correntropy-induced loss (C-loss) has recently been employed in learning algorithms to improve their robustness to non-Gaussian noise and outliers. Despite its success in robust learning, little work has been done to study the generalization performance of regularized regression with the C-loss. To enrich this theme, this paper investigates a kernel-based regression algorithm with the C-loss and an ℓ1-regularizer in data-dependent hypothesis spaces. The asymptotic learning rate of the proposed algorithm is established by means of a novel error decomposition and a capacity-based analysis technique. The sparsity of the derived predictor is characterized theoretically. Empirical evaluations demonstrate its advantages over related approaches.
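To make the setup concrete, the following is a minimal sketch, not the paper's actual algorithm or analysis, of coefficient-based kernel regression with the C-loss and an ℓ1 penalty on the expansion coefficients, fit with a plain proximal-gradient loop. The Gaussian kernel, the parameter names (sigma, lam, width), the step-size choice, and the toy data are illustrative assumptions.

```python
import numpy as np

# Sketch: minimize (1/n) * sum_i C-loss(y_i - (K alpha)_i) + lam * ||alpha||_1,
# where C-loss(r) = sigma^2 * (1 - exp(-r^2 / (2 sigma^2))) and the predictor
# is f(x) = sum_j alpha_j * K(x, x_j) over the training inputs.

def gaussian_kernel(X1, X2, width=0.5):
    """Gaussian (RBF) kernel matrix between two sample sets."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def soft_threshold(v, t):
    """Proximal operator of the l1 norm (componentwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fit_closs_l1(X, y, lam=0.05, sigma=1.0, width=0.5, iters=2000):
    """Proximal-gradient (ISTA-style) fit of the C-loss + l1 objective."""
    K = gaussian_kernel(X, X, width)
    n = len(y)
    # Conservative step size from a Lipschitz bound on the smooth part.
    step = n / (np.linalg.norm(K, 2) ** 2)
    alpha = np.zeros(n)
    for _ in range(iters):
        r = y - K @ alpha
        # d/dr of the C-loss is r * exp(-r^2 / (2 sigma^2)); chain rule gives
        # the gradient with respect to alpha through r = y - K alpha.
        grad = -(K.T @ (r * np.exp(-r ** 2 / (2.0 * sigma ** 2)))) / n
        alpha = soft_threshold(alpha - step * grad, step * lam)
    return alpha

# Toy usage: a smooth target with a few gross outliers to mimic
# non-Gaussian noise; the l1 penalty keeps only a subset of coefficients.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(60, 1))
y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.standard_normal(60)
y[::15] += 3.0  # inject outliers
alpha = fit_closs_l1(X, y)
print("active coefficients:", int(np.sum(np.abs(alpha) > 1e-4)))
```

The bounded C-loss down-weights large residuals (its derivative vanishes as the residual grows), which is what gives the method its robustness to outliers, while the ℓ1 term drives many coefficients to zero and yields the sparse predictor studied in the paper.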
