Abstract
Learning with coefficient-based regularization has attracted considerable attention in recent years, both in theoretical analysis and in applications. In this paper, we study the coefficient-based learning scheme (CBLS) for the regression problem with an lq-regularizer (1 < q ⩽ 2). Our analysis is conducted under more general conditions; in particular, the kernel function is not necessarily positive definite. We apply a concentration inequality with l2-empirical covering numbers to present a detailed capacity-dependent analysis for CBLS, which yields sharper estimates than existing bounds. Moreover, we estimate the regularization error to support the assumptions made in the error analysis, and we provide an illustrative example to further verify the theoretical results.
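For concreteness, the following Python sketch shows what a coefficient-based scheme of this kind typically looks like: the predictor is a linear combination of kernel sections centered at the samples, and the coefficients minimize the empirical least-squares risk plus an lq penalty. The data, the (indefinite) sigmoid-type kernel, and the parameter values below are hypothetical choices for illustration only; the paper's precise formulation and assumptions are given in the body of the text.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical one-dimensional regression data (illustrative only).
rng = np.random.default_rng(0)
m = 50
x = rng.uniform(-1.0, 1.0, size=m)
y = np.sin(np.pi * x) + 0.1 * rng.standard_normal(m)

# A kernel that need not be positive definite, e.g. a sigmoid-type kernel.
def kernel(s, t):
    return np.tanh(2.0 * s * t + 1.0)

K = kernel(x[:, None], x[None, :])   # m x m kernel matrix on the sample

q = 1.5      # exponent of the l_q penalty, 1 < q <= 2
lam = 0.01   # regularization parameter lambda

# Coefficient-based objective: empirical risk of
# f_alpha(t) = sum_i alpha_i K(t, x_i) plus lam * sum_i |alpha_i|^q.
def objective(alpha):
    residual = K @ alpha - y
    return np.mean(residual ** 2) + lam * np.sum(np.abs(alpha) ** q)

alpha_hat = minimize(objective, np.zeros(m), method="L-BFGS-B").x

# Predictor at new points.
def predict(t):
    return kernel(np.asarray(t)[:, None], x[None, :]) @ alpha_hat

print(predict(np.array([0.0, 0.5])))
```

Because the penalty uses only the coefficients alpha (not an RKHS norm), the scheme remains well defined even when K is indefinite, which is the setting the abstract emphasizes.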