Abstract
Sparse additive models have been successfully applied to high-dimensional data analysis due to the flexibility and interpretability of their representation. However, existing methods are often formulated with the least-squares loss, which learns the conditional mean and is therefore sensitive to non-Gaussian noise, e.g., skewed noise, heavy-tailed noise, and outliers. To tackle this problem, we propose a new robust regression method, called the sparse modal additive model (SpMAM), which integrates the modal regression metric, a data-dependent hypothesis space, and a weighted ℓq,1-norm regularizer (q ≥ 1) into the additive models. Specifically, the modal regression metric ensures robustness to complex noise by learning the conditional mode, the data-dependent hypothesis space offers model adaptivity via a sample-based representation, and the ℓq,1-norm regularizer provides algorithmic interpretability via sparse variable selection. In theory, the proposed SpMAM enjoys statistical guarantees on asymptotic consistency for regression estimation and variable selection simultaneously. Experimental results on both synthetic and real-world benchmark data sets validate the effectiveness and robustness of the proposed model.
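To make the robustness claim concrete, the following is a minimal sketch (not the paper's exact formulation) of why a mode-induced metric tolerates outliers while the squared loss does not. It assumes a Gaussian kernel for the modal objective; the function name `modal_risk` and the bandwidth `h` are illustrative choices, not symbols from the paper:

```python
import numpy as np

def modal_risk(residuals, h=0.5):
    """Empirical modal-regression objective: the average Gaussian kernel
    density of the residuals evaluated at zero. Larger is better
    (residuals concentrate near 0); a gross outlier contributes a value
    near zero instead of dominating the objective."""
    return np.mean(np.exp(-(residuals / h) ** 2 / 2)) / (h * np.sqrt(2 * np.pi))

# Residuals from a good fit, then the same residuals plus one gross outlier.
clean = np.array([0.1, -0.05, 0.02, 0.08, -0.1])
with_outlier = np.append(clean, 50.0)

# The squared loss is dominated by the single outlier...
mse_clean, mse_out = np.mean(clean ** 2), np.mean(with_outlier ** 2)
# ...while the modal objective barely moves.
mod_clean, mod_out = modal_risk(clean), modal_risk(with_outlier)
```

Here `mse_out` is several orders of magnitude larger than `mse_clean`, whereas `mod_out` stays within the factor 5/6 of `mod_clean` that simply reflects one extra sample; maximizing such a kernel-density objective drives the fit toward the conditional mode rather than the outlier-distorted mean.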
Published in: IEEE Transactions on Neural Networks and Learning Systems