Abstract

Single-hidden layer feedforward networks (SLFNs) are widely used as classical methods for binary classification and regression. Several variants of SLFNs exist, such as support vector machines (SVM) and extreme learning machines (ELM). Obtaining a powerful feature mapping with a simple network structure remains an open problem for SLFNs. In this paper, we propose a framework called sparse and heuristic SVM (SH-SVM) that fuses different SLFNs at the feature-mapping level to obtain a powerful feature mapping capability and improve generalization performance. By fusing different SLFNs, SH-SVM benefits from the learning capabilities of each model. As an example, the fusion of SVM and ELM is studied in detail. A sparse representation method then selects the most effective hidden nodes, yielding a compact SLFN. Furthermore, an efficient method for solving the sparse representation problem in SH-SVM is proposed. Experiments on 25 data sets comparing eight methods show that SH-SVM achieves satisfactory results with a compact network structure.
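The abstract does not give the exact SH-SVM formulation, but the general idea it describes (an ELM-style random feature mapping, sparse selection of hidden nodes, and an SVM trained on the compact mapping) can be sketched roughly as follows. All names, parameters, and the use of an L1 penalty for node selection are illustrative assumptions using scikit-learn, not the paper's actual algorithm.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# ELM-style hidden layer: random weights and biases, sigmoid activation.
n_hidden = 200
W = rng.standard_normal((X.shape[1], n_hidden))
b = rng.standard_normal(n_hidden)

def elm_map(X):
    """Map inputs through the random hidden layer."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

H_tr = elm_map(X_tr)

# Sparse selection: an L1-penalized fit keeps only a subset of hidden
# nodes (nonzero coefficients), giving a compact network structure.
lasso = Lasso(alpha=0.01).fit(H_tr, y_tr)
selected = np.flatnonzero(lasso.coef_)

# Train a linear SVM on the compact feature mapping and evaluate.
svm = LinearSVC().fit(H_tr[:, selected], y_tr)
acc = svm.score(elm_map(X_te)[:, selected], y_te)
print(f"{selected.size} of {n_hidden} hidden nodes kept, accuracy {acc:.3f}")
```

In this sketch the sparsity level is controlled by the Lasso `alpha`; the paper instead proposes its own efficient method for the sparse representation problem, which this example does not reproduce.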
