Sparse additive models have shown competitive performance for high-dimensional variable selection and prediction, owing to their representational flexibility and interpretability. Although their theoretical properties have been studied extensively, few works have addressed the robustness of sparse additive models. In this paper, we employ the robust average top-k (ATk) loss as the classification error measure and propose a new sparse algorithm, named the ATk group sparse additive machine (ATk-GSAM). Beyond its robustness, ATk-GSAM achieves good adaptivity by integrating a data-dependent hypothesis space with a group sparse regularizer. A generalization error bound is established via concentration estimates with empirical covering numbers. In particular, our error analysis shows that ATk-GSAM can achieve the learning rate O(n^{-1/2}) under appropriate conditions. We further analyze the robustness of ATk-GSAM through a sample-weighted procedure interpretation, and provide theoretical guarantees on grouped variable selection. Experimental evaluations on both simulated and benchmark datasets validate the effectiveness and robustness of the new algorithm.
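The ATk aggregate loss referenced above averages the k largest per-sample losses, interpolating between the maximum loss (k = 1) and the mean loss (k = n). A minimal sketch of that aggregation (function name and sample values are illustrative, not from the paper):

```python
import numpy as np

def average_top_k_loss(losses, k):
    """Average top-k (ATk) aggregate loss: the mean of the k largest
    individual losses. k=1 recovers the maximum loss; k=n recovers
    the ordinary average loss."""
    losses = np.asarray(losses, dtype=float)
    top_k = np.sort(losses)[-k:]  # k largest per-sample losses
    return top_k.mean()

# Illustrative per-sample losses for four training points
sample_losses = [0.1, 0.5, 2.0, 0.0]
atk = average_top_k_loss(sample_losses, k=2)  # mean of {2.0, 0.5} = 1.25
```

By focusing on the larger losses while not concentrating solely on the single worst sample, this aggregation is less sensitive to outliers than the maximum loss and less easily dominated by the bulk of easy samples than the average loss.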