Abstract

Extreme learning machine (ELM) is a powerful data-driven modeling method that has been widely applied in various practical fields. It relies on the assumption that the training samples are clean, free of noise or, worse yet, outliers. This is often not the case in real-world applications, and it results in poor robustness. In this paper, we address a key weakness of ELM: its sensitivity to outliers. By introducing a non-convex loss function, we propose a robust regularized extreme learning machine for regression via a difference of convex functions (DC) program, denoted RRELM. The proposed non-convex loss function places a constant penalty on any large outlier to suppress its negative effect, and it can be decomposed into the difference of two convex functions, so RRELM can be solved by DC optimization. Numerical experiments were conducted on various datasets to examine the validity of RRELM; in each experiment, the training samples were randomly contaminated with outlier levels of 0%, 10%, 20%, 30% and 40%. We also applied RRELM to financial time series prediction. The experimental results verify that the proposed RRELM yields superior generalization performance and is less affected by increasing proportions of outliers than the competing method.
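To make the idea concrete, the following is a minimal sketch (not the paper's actual algorithm) of a regularized ELM for regression, together with one common bounded loss, a truncated squared loss, written as a difference of two convex functions as the abstract describes. The network sizes, regularization constant, truncation threshold, and toy data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data with a few large outliers injected into the targets.
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(200)
y[:20] += 5.0  # simulated outliers (20% would be 40 samples; here 10%)

# Basic regularized ELM: randomly fixed hidden layer, ridge solution for
# the output weights beta (this is the standard non-robust baseline).
n_hidden, C = 50, 1.0
W = rng.standard_normal((X.shape[1], n_hidden))
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)  # hidden-layer output matrix
beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ y)

# A bounded (truncated) squared loss caps the penalty on large residuals:
#   ell(r) = min(r^2, t) = r^2 - max(r^2 - t, 0),
# i.e. the difference of two convex functions, which is what makes a
# DC-programming (CCCP-style) solver applicable.
def truncated_loss(r, t=1.0):
    """Bounded loss: quadratic for |r| <= sqrt(t), constant t beyond."""
    return np.minimum(r ** 2, t)

resid = y - H @ beta
# Outliers contribute at most t each, instead of their full squared error.
print(truncated_loss(resid).sum())
```

The DC decomposition in the comment is the key point: each DC iteration linearizes the concave part `-max(r^2 - t, 0)` at the current solution, reducing the robust problem to a sequence of weighted ridge-regression subproblems.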
