Abstract

This article is devoted to spline adaptation to an unknown degree and type of inhomogeneous smoothness over large function classes in d-dimensional nonparametric regression. In particular, we study (1) the selection bias introduced by using the unbiased risk estimator for knot selection and (2) the convergence properties of an adaptive regression spline estimator (with variable multiple knots). The selection bias is shown to be of much smaller order than the ideal loss (risk), which, in turn, yields the optimal rate of convergence of the adaptive spline estimator over large Besov classes. This implies that the regression spline estimator indeed shares the optimality properties of wavelet shrinkage and local kernel estimators.
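To make the setting concrete, the following is a minimal sketch (not the paper's algorithm) of knot selection by an unbiased risk estimator: linear regression splines with varying numbers of equally spaced knots are fit by least squares, and the knot set minimizing a Mallows-Cp-style risk estimate is selected. The test function, noise level, and truncated-power basis are illustrative assumptions.

```python
# Hypothetical illustration: selecting spline knots with an unbiased
# risk estimate (Mallows-Cp form), assuming known noise variance.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = np.sort(rng.uniform(0, 1, n))
sigma = 0.1
# A function with inhomogeneous smoothness: slowly varying on the
# left, rapidly oscillating on the right.
y = np.sin(6 * np.pi * x**2) + sigma * rng.normal(size=n)

def spline_basis(x, knots):
    """Truncated-power basis for a linear spline with given interior knots."""
    cols = [np.ones_like(x), x]
    cols += [np.clip(x - t, 0.0, None) for t in knots]
    return np.column_stack(cols)

def risk_estimate(x, y, knots, sigma2):
    """RSS + 2*sigma^2*p: an unbiased risk estimate up to a constant."""
    B = spline_basis(x, knots)
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    resid = y - B @ coef
    p = B.shape[1]
    return resid @ resid + 2.0 * sigma2 * p

# Search over equally spaced knot sets of increasing size and keep
# the configuration minimizing the estimated risk.
best_risk, best_k = min(
    (risk_estimate(x, y, np.linspace(0, 1, k + 2)[1:-1], sigma**2), k)
    for k in range(1, 40)
)
print("selected number of knots:", best_k)
```

The selection step above is exactly where the bias studied in the abstract arises: the same data are used both to estimate the risk and to pick its minimizer, so the minimized risk estimate understates the true loss.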
