Abstract
In this paper, we study regularized learning schemes based on the l1-regularizer and the pinball loss in a data-dependent hypothesis space. The goal is an error analysis for quantile regression learning. No regularity condition is imposed on the kernel function beyond continuity and boundedness. The graph-based semi-supervised algorithm introduces an extra error term, called the manifold error. New error bounds and convergence rates are derived using techniques based on the l1-empirical covering number and an error decomposition.
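For concreteness, the pinball loss that underlies quantile regression at level tau can be sketched as follows. This is a minimal NumPy sketch; the function name and vectorized form are illustrative and not taken from the paper:

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Average pinball (quantile) loss at quantile level tau in (0, 1).

    For residual r = y_true - y_pred the pointwise loss is
    tau * r when r >= 0, and (tau - 1) * r when r < 0.
    With tau = 0.5 this reduces to half the absolute error.
    """
    r = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.mean(np.maximum(tau * r, (tau - 1.0) * r)))
```

Minimizing the empirical pinball loss pushes the estimate toward the conditional tau-quantile rather than the conditional mean, which is why the loss asymmetry (tau vs. tau - 1) matters for tau != 0.5.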
Highlights
Classical least-squares regression models focus mainly on estimating conditional mean functions
We study regularized learning schemes based on the l1-regularizer and the pinball loss in a data-dependent hypothesis space
Quantile regression provides richer information about the conditional distribution of the response variable, such as the behavior of its tails, so it is useful when lower and upper quantiles, or all quantiles, are of interest
Summary
Classical least-squares regression models focus mainly on estimating conditional mean functions. Quantile regression provides richer information about the conditional distribution of the response variable, such as the behavior of its tails, so it is useful in applications where lower and upper quantiles, or all quantiles, are of interest. Relative to least-squares regression, quantile regression estimates are more robust to outliers in the response measurements. We introduce a framework for data-dependent regularization that exploits the geometry of the probability distribution generating the data: the labeled and unlabeled data are used to construct a graph, which is then incorporated as an additional regularization term. There are two regularization terms: one controls the complexity of the classifier in the ambient space, and the other controls its complexity as measured by the geometry of the distribution in the intrinsic space
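The intrinsic-space regularization term from the graph-based semi-supervised construction can be sketched as a graph-Laplacian penalty over labeled and unlabeled points. The Gaussian edge weights and function names below are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def manifold_penalty(X, f, sigma=1.0):
    """Graph-Laplacian smoothness penalty f^T L f.

    X : (n, d) array of labeled + unlabeled inputs.
    f : (n,) array of function values at those inputs.
    Edges use Gaussian weights W_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))
    (an assumed choice); L = D - W is the unnormalized graph Laplacian, so
    f^T L f = 0.5 * sum_ij W_ij (f_i - f_j)^2 penalizes functions that vary
    quickly between nearby points.
    """
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)               # no self-loops
    L = np.diag(W.sum(axis=1)) - W         # unnormalized Laplacian D - W
    return float(f @ L @ f)
```

A constant function incurs zero penalty, while a function that oscillates between neighboring points incurs a large one; adding this term to the pinball-loss objective is what produces the extra manifold error analyzed in the paper.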