Additive index models (AIMs) can be viewed as a class of artificial neural networks built on nonparametric activation functions, the so-called ridge functions. Recently, they have been shown to achieve enhanced explainability after incorporating various interpretability constraints. However, training AIMs by either the backfitting algorithm or joint stochastic optimization is known to be very slow, especially for high-dimensional inputs. In this article, we propose a novel sequential approach based on the celebrated Stein's lemma. The proposed SeqStein method decouples the training of AIMs into two separable steps: 1) Stein's estimation of the projection indices and 2) nonparametric estimation of the ridge functions using smoothing splines. We show through numerical experiments that the SeqStein algorithm is not only more efficient for training AIMs, but also inclined to produce more interpretable models with smooth ridge functions and sparse, nearly orthogonal projection indices.
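To illustrate the two-step decoupling described above, the following is a minimal sketch, not the authors' implementation: it assumes standardized Gaussian inputs, uses a second-order Stein-identity moment matrix to estimate the projection indices, and fits each ridge function with a univariate smoothing spline. The function name `seqstein_sketch`, the number of indices `n_indices`, and the residual-update loop are illustrative assumptions; the actual SeqStein algorithm may extract indices and fit ridge functions differently.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def seqstein_sketch(X, y, n_indices):
    """Illustrative two-step fit for an additive index model
    y ~ sum_k g_k(w_k^T x), assuming standardized Gaussian inputs."""
    n, p = X.shape

    # Step 1 (assumed form): Stein-type estimation of projection indices.
    # For x ~ N(0, I), the second-order Stein identity gives
    # E[y (x x^T - I)] = sum_k E[g_k''(w_k^T x)] w_k w_k^T,
    # so the leading eigenvectors of this moment matrix span the index space.
    M = (X.T * y) @ X / n - np.mean(y) * np.eye(p)
    M = (M + M.T) / 2
    eigval, eigvec = np.linalg.eigh(M)
    order = np.argsort(-np.abs(eigval))
    W = eigvec[:, order[:n_indices]]  # estimated projection indices (columns)

    # Step 2: nonparametric estimation of each ridge function with a
    # smoothing spline on the projected, sorted data (simple residual update).
    splines, residual = [], y.astype(float).copy()
    for k in range(n_indices):
        z = X @ W[:, k]
        idx = np.argsort(z)
        s = UnivariateSpline(z[idx], residual[idx], k=3)
        splines.append(s)
        residual = residual - s(z)
    return W, splines
```

A usage example under the same assumptions: draw `X` from a standard Gaussian, generate `y` from a few known ridge functions, call `seqstein_sketch(X, y, n_indices=2)`, and compare the recovered columns of `W` with the true indices up to sign and rotation.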