Abstract

Convergence rates of regression spline estimators have been established for a general framework in statistical modeling. It is well known that qth-order regression splines have optimal rates under mild assumptions. Increasing the number of knots tends to improve the approximation error rate but worsen the estimation error rate, and the optimal rate is attained by setting the two rates to be the same. For splines that are constrained to be monotone or convex, it is straightforward to show that the constrained estimator attains the optimal rate if the approximation in the spline space also satisfies the constraints. If the monotonicity or convexity of the true regression function holds strictly, then the spline approximation will satisfy the constraints for a fine enough knot mesh. However, if there are intervals over which the constraints do not hold strictly, there is no guarantee that the approximation satisfies the constraints even for large numbers of finely spaced knots, and therefore convergence rates of constrained regression splines have not been fully established. In this paper, we show that when the true function satisfies the constraints, there is a sufficiently close function in the spline space that also satisfies the constraints, and hence the constrained spline estimator attains the optimal rate of convergence.
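To make the setting concrete, the following is a minimal sketch of a monotone-constrained regression spline fit, not the estimator analyzed in the paper. It uses the standard sufficient condition that nondecreasing B-spline coefficients yield a nondecreasing spline, and a hypothetical true function that is monotone but flat on an interval, i.e. a case where the constraint does not hold strictly. All names, knot choices, and the example function are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
n = 200
x = np.sort(rng.uniform(0.0, 1.0, n))

# Hypothetical true function: nondecreasing, but flat on [0.4, 0.6],
# so monotonicity does not hold strictly on that interval.
f = np.piecewise(
    x,
    [x < 0.4, (x >= 0.4) & (x <= 0.6), x > 0.6],
    [lambda u: 0.5 * (u / 0.4) ** 2,
     0.5,
     lambda u: 0.5 + 0.5 * ((u - 0.6) / 0.4) ** 2],
)
y = f + 0.05 * rng.standard_normal(n)

k = 3                                     # cubic splines (order q = 4)
interior = np.linspace(0.0, 1.0, 11)[1:-1]
t = np.r_[[0.0] * (k + 1), interior, [1.0] * (k + 1)]  # clamped knot vector
m = len(t) - k - 1                        # number of B-spline basis functions

# Design matrix: column j is the j-th B-spline basis function evaluated at x.
B = np.column_stack([BSpline(t, np.eye(m)[j], k)(x) for j in range(m)])

# Reparameterize coefficients as cumulative increments: c = T @ theta with
# theta[0] free and theta[1:] >= 0 forces nondecreasing coefficients,
# which is sufficient for the fitted spline to be nondecreasing.
T = np.tril(np.ones((m, m)))
lb = np.r_[-np.inf, np.zeros(m - 1)]
res = lsq_linear(B @ T, y, bounds=(lb, np.inf))

c_hat = T @ res.x    # nondecreasing B-spline coefficients
fhat = B @ c_hat     # fitted values at the (sorted) design points
```

Because nondecreasing coefficients are sufficient but not necessary for monotonicity, this constrained least-squares fit searches a slightly smaller set than the full cone of monotone splines, which is one reason the approximation-theoretic argument in the abstract is needed to establish rates.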
