Abstract

We consider the standard additive regression model consisting of two components f0 and g0. The first component f0 is assumed to be in some sense “smoother” than the second, g0. It is known that in this case one can construct estimators that estimate the smoother component f0 at the fast minimax rate, as if the non-smooth component g0 were known. Our contribution shows that this phenomenon also occurs when one uses the penalized least squares estimator (f̂, ĝ) of (f0, g0). We describe smoothness in terms of a semi-norm on the class of regression functions. This covers the case of Sobolev smoothness, where the penalized estimator is a standard spline estimator. The theory is illustrated by a simulation study. Our proofs rely on recent results from empirical process theory.
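To make the setting concrete, the following is a minimal illustrative sketch (not the paper's estimator) of penalized least squares in a two-component additive model. The components f0 and g0, the truncated-power basis, the knot placement, and the penalty weights are all hypothetical choices for illustration; the heavier penalty on the f-block loosely encodes the assumption that f0 is the smoother component.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
x = rng.uniform(0, 1, n)  # covariate of the smooth component f0
z = rng.uniform(0, 1, n)  # covariate of the rougher component g0

# hypothetical components: f0 is smooth, g0 has a kink (less smooth)
f0 = lambda t: np.sin(2 * np.pi * t)
g0 = lambda t: 4.0 * np.abs(t - 0.5)
y = f0(x) + g0(z) + 0.2 * rng.standard_normal(n)

def tp_basis(t, knots, degree, intercept=True):
    """Truncated power basis: polynomial part plus (t - k)_+^degree terms."""
    start = 0 if intercept else 1
    cols = [t ** d for d in range(start, degree + 1)]
    cols += [np.clip(t - k, 0.0, None) ** degree for k in knots]
    return np.column_stack(cols)

knots = np.linspace(0.1, 0.9, 8)
Bx = tp_basis(x, knots, degree=3, intercept=True)   # block for f
Bz = tp_basis(z, knots, degree=3, intercept=False)  # no intercept: identifiability
B = np.hstack([Bx, Bz])

# ridge-type penalty on the knot coefficients only; the polynomial parts
# stay unpenalized, as in standard penalized spline estimation
pen = np.zeros(B.shape[1])
pen[4:4 + len(knots)] = 10.0               # knot coefficients of the f-block
pen[Bx.shape[1] + 3:] = 0.1                # knot coefficients of the g-block
A = B.T @ B + np.diag(pen)
beta = np.linalg.solve(A, B.T @ y)         # penalized least squares solution

fhat = Bx @ beta[:Bx.shape[1]]
ghat = Bz @ beta[Bx.shape[1]:]
rmse = np.sqrt(np.mean((fhat + ghat - (f0(x) + g0(z))) ** 2))
print("RMSE of fitted additive surface:", rmse)
```

In this sketch the two smoothing parameters are fixed by hand; in practice they would be chosen by cross-validation or a related criterion, and the phenomenon described in the abstract concerns the rate at which f̂ recovers f0 when the penalties reflect the differing smoothness.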
