Abstract

Extreme learning machine (ELM) has demonstrated great potential in machine learning owing to its simplicity, fast training speed, and good generalization performance. In this work, based on the least-squares estimate (LSE) and the least absolute deviation (LAD), we propose four sparse ELM formulations with zero-norm regularization that automatically select the optimal hidden nodes. We then develop two continuous optimization methods to solve the proposed problems. The first is a DC (difference of convex functions) approximation approach that approximates the zero-norm by a DC function, so that the resulting optimizations are posed as DC programs. The second is an exact penalty technique for the zero-norm; the resulting problems are again reformulated as DC programs, and the corresponding DCAs (DC algorithms) converge after finitely many iterations. Moreover, the proposed framework is applied directly to recognizing the hardness of licorice seeds from near-infrared spectral data. Experiments in different spectral regions illustrate that the proposed approaches reduce the number of hidden nodes (or output features) while either improving generalization or showing no significant difference relative to traditional ELM methods and the support vector machine (SVM). Experiments on several benchmark data sets demonstrate that the proposed framework is competitive with the traditional approaches in generalization but selects fewer output features.
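As a rough sketch of the formulations described above (the exact objectives are given in the paper; the notation H for the hidden-layer output matrix, T for the target matrix, and \beta for the output weights is assumed here), the LSE-based problem can be pictured as

    \min_{\beta} \; \|H\beta - T\|_2^2 + \lambda \|\beta\|_0,

with the LAD variant replacing the squared loss by \|H\beta - T\|_1; zeroed rows of \beta correspond to pruned hidden nodes. A standard DC surrogate for the zero-norm, e.g. \|\beta\|_0 \approx \sum_i \bigl(1 - e^{-\alpha|\beta_i|}\bigr) with \alpha > 0, then yields a difference-of-convex objective amenable to DCA.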
