Abstract

Additive models have been widely used as a flexible nonparametric regression method that can overcome the curse of dimensionality. Using sparsity-inducing penalties for variable selection, several methods have been developed for fitting additive models when the number of predictors is very large, sometimes even larger than the sample size. However, despite good asymptotic properties, the finite sample performance of these methods may deteriorate considerably when the number of relevant predictors becomes moderately large. This article proposes a new method that reduces the number of unknown functions to be nonparametrically estimated by learning a predictive subspace representation shared by the additive component functions. The subspace learning is integrated with sparsity-inducing penalization in a penalized least squares formulation, and an efficient algorithm is developed for computation involving Stiefel matrix manifold optimization and proximal thresholding operators on matrices. Theoretical convergence properties of the algorithm are studied. The proposed method is shown to be competitive with existing methods in simulation studies and a real data example. Supplementary materials for this article are available online.
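The two algorithmic ingredients named above, optimization over a Stiefel matrix manifold and proximal thresholding operators on matrices, can be illustrated with a minimal sketch. The function names, the QR-based retraction, and the row-wise group soft-thresholding rule below are illustrative assumptions, not the authors' actual algorithm or implementation.

```python
import numpy as np

def prox_row_group_threshold(B, lam):
    """Row-wise group soft-thresholding, a common proximal operator for
    sparsity-inducing penalties on a coefficient matrix B.  Each row is
    shrunk toward zero by lam in Euclidean norm; rows whose norm is at
    most lam become exactly zero, dropping the corresponding predictor.
    (Illustrative choice of penalty; not taken from the article.)"""
    norms = np.linalg.norm(B, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return B * scale

def stiefel_retraction_step(U, grad, step):
    """One descent step on the Stiefel manifold {U : U^T U = I} using a
    QR-based retraction: move along the negative gradient in the ambient
    space, then re-orthonormalize the columns."""
    Q, R = np.linalg.qr(U - step * grad)
    # Fix column signs so the retraction is continuous in U.
    signs = np.sign(np.sign(np.diag(R)) + 0.5)
    return Q * signs

# Tiny usage example with hypothetical dimensions.
rng = np.random.default_rng(0)
B = rng.normal(size=(10, 3))          # 10 predictors, 3 subspace directions
B_sparse = prox_row_group_threshold(B, lam=1.5)
U, _ = np.linalg.qr(rng.normal(size=(10, 3)))
U_next = stiefel_retraction_step(U, grad=rng.normal(size=(10, 3)), step=0.1)
print((np.linalg.norm(B_sparse, axis=1) == 0).sum(), "rows zeroed")
print(np.allclose(U_next.T @ U_next, np.eye(3)))  # orthonormality preserved
```

In a penalized least squares scheme of the kind described in the abstract, steps of these two types would typically alternate: the orthonormal subspace basis is updated on the manifold, and the sparse coefficient matrix is updated via the proximal operator.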
