Abstract

As a commonly used nonparametric model for overcoming the curse of dimensionality, the additive model continues to attract the attention of researchers. Our recent work (He et al., 2022) proposed to reduce the number of unknown functions to be estimated by learning an adaptive subspace shared by the additive component functions. Equipped with an efficient algorithm, the proposed reduced additive model outperforms state-of-the-art alternatives in numerical studies. However, the asymptotic properties of the proposed estimators have not been explored, and the empirical findings lack theoretical justification. In this work, we fill this theoretical gap by showing that the resulting estimator achieves a faster convergence rate than estimation without subspace learning; this holds even when the reduced additive model is only an approximation, provided that the subspace approximation error is small. Moreover, the proposed method consistently identifies the relevant predictors. The developed theoretical results support the earlier empirical findings.
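To fix ideas, here is a minimal sketch of the model structure alluded to above (the notation is ours, for illustration, and not necessarily that of He et al., 2022). The additive model posits
$$Y = \mu + \sum_{j=1}^{p} f_j(X_j) + \varepsilon,$$
with $p$ unknown component functions $f_j$. The reduced additive model constrains these components to a shared low-dimensional function subspace,
$$f_j(x) = \sum_{k=1}^{K} \beta_{jk}\, g_k(x), \qquad K \ll p,$$
so that only $K$ unknown functions $g_k$ (together with the scalar loadings $\beta_{jk}$) need to be estimated, rather than $p$ separate functions.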