Abstract

In this paper, we consider regularized learning schemes based on the l1-regularizer and the pinball loss in a data-dependent hypothesis space. The target is an error analysis for quantile regression learning. No regularity conditions are imposed on the kernel function beyond continuity and boundedness. The graph-based semi-supervised algorithm leads to an extra error term, called the manifold error. New error bounds and convergence rates are derived explicitly with techniques based on the l1-empirical covering number and a bound decomposition.
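For concreteness, here is a sketch of the two central objects, using standard notation from the quantile-regression literature rather than the paper's own (the exact normalization may differ). For a quantile level $\tau \in (0,1)$, the pinball loss is

\[
\rho_\tau(u) =
\begin{cases}
\tau u, & u > 0,\\
(\tau - 1)\,u, & u \le 0,
\end{cases}
\]

and, given a sample $\mathbf{z} = \{(x_i, y_i)\}_{i=1}^m$ and a continuous bounded kernel $K$, a coefficient-based l1 scheme over the data-dependent hypothesis space $\mathcal{H}_{\mathbf{z}} = \{\sum_{i=1}^m \alpha_i K(\cdot, x_i) : \boldsymbol\alpha \in \mathbb{R}^m\}$ takes the form

\[
f_{\mathbf{z}} \;=\; \arg\min_{f_{\boldsymbol\alpha} \in \mathcal{H}_{\mathbf{z}}}\;
\frac{1}{m} \sum_{i=1}^m \rho_\tau\bigl(y_i - f_{\boldsymbol\alpha}(x_i)\bigr)
\;+\; \lambda \sum_{i=1}^m |\alpha_i|.
\]

Minimizing the expected pinball loss recovers the conditional $\tau$-quantile of the response, which is what makes it the natural loss for quantile regression.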

Highlights

  • The classical least-squares regression models have focused mainly on estimating conditional mean functions

  • We consider regularized learning schemes based on the l1-regularizer and pinball loss in a data-dependent hypothesis space

  • Quantile regression can provide richer information about the conditional distribution of the response variable, such as stretching or compressing of its tails, so it is useful in applications where lower, upper, or all quantiles are of interest


Summary

Introduction

Classical least-squares regression models focus mainly on estimating conditional mean functions. Quantile regression can provide richer information about the conditional distribution of the response variable, such as stretching or compressing of its tails, so it is useful in applications where lower, upper, or all quantiles are of interest. Relative to least-squares regression, quantile regression estimates are also more robust against outliers in the response measurements. We introduce a framework for data-dependent regularization that exploits the geometry of the probability distribution generating the data: a graph learnt from the labeled and unlabeled data is incorporated as an additional regularization term. The scheme therefore contains two regularization terms, one controlling the complexity of the estimator in the ambient space and the other controlling its complexity as measured by the geometry of the distribution in the intrinsic space.
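A hedged sketch of this graph-based semi-supervised scheme (notation assumed for illustration, not taken verbatim from the paper): given labeled points $\{(x_i, y_i)\}_{i=1}^m$, unlabeled points $\{x_i\}_{i=m+1}^{m+u}$, and a graph Laplacian $L$ built on all $m+u$ inputs, the Laplacian penalty is added to the l1-regularized pinball risk,

\[
f_{\mathbf{z}} \;=\; \arg\min_{f_{\boldsymbol\alpha}}\;
\frac{1}{m} \sum_{i=1}^m \rho_\tau\bigl(y_i - f_{\boldsymbol\alpha}(x_i)\bigr)
\;+\; \lambda \sum_{i=1}^{m+u} |\alpha_i|
\;+\; \gamma\, \mathbf{f}^{\top} L\, \mathbf{f},
\qquad
\mathbf{f} = \bigl(f_{\boldsymbol\alpha}(x_1), \dots, f_{\boldsymbol\alpha}(x_{m+u})\bigr)^{\top},
\]

where $f_{\boldsymbol\alpha} = \sum_{i=1}^{m+u} \alpha_i K(\cdot, x_i)$. The l1 term controls complexity in the ambient space, the Laplacian term controls it in the intrinsic space, and it is this second term that produces the extra "manifold error" in the analysis.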

The Model
The Restriction
Error Decomposition
Estimation of the Manifold Error
Estimation of the Hypothesis Error
Estimation of the Sample Error
Total Error Bound
Convergence Rates and Main Result
The Sparsity of the Algorithm
Conclusion

