Abstract
Single-hidden-layer feedforward networks (SLFNs) are widely regarded as classical methods for binary classification and regression. Several variants of SLFNs exist, such as support vector machines (SVMs) and extreme learning machines (ELMs). Obtaining a powerful feature mapper with a simple network structure remains an open problem for SLFNs. In this paper, we propose a framework called sparse and heuristic SVM (SH-SVM) that fuses different SLFNs at the level of feature mapping, in order to obtain powerful feature mapping capability and improve generalization performance. By fusing different SLFNs, SH-SVM benefits from the learning capabilities of each model. As an example, the fusion of SVM and ELM is studied in detail. A sparse representation method then yields a compact SLFN by selecting the most powerful hidden nodes. Furthermore, an efficient method for solving the sparse representation problem in SH-SVM is proposed. Experiments on 25 data sets against eight methods show that SH-SVM achieves satisfactory results with a compact network structure.
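The abstract's core idea, an ELM-style random feature mapping followed by sparse selection of the most useful hidden nodes, can be sketched as follows. This is a minimal illustration only, not the paper's SH-SVM algorithm: the data, network size, and the greedy (OMP-style) selection routine `select_nodes` are all assumptions standing in for the sparse representation method described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs (hypothetical example).
X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)), rng.normal(1.0, 1.0, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])

# ELM-style random feature mapping: hidden-layer weights and biases are
# drawn at random and never trained.
n_hidden = 40
W = rng.normal(size=(2, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)  # hidden-layer output matrix, shape (100, 40)

def select_nodes(H, y, k):
    """Greedily pick k hidden nodes whose activations best explain the
    labels (an OMP-style stand-in for the paper's sparse representation)."""
    selected = []
    residual = y.astype(float).copy()
    beta = np.zeros(0)
    for _ in range(k):
        corr = np.abs(H.T @ residual)
        corr[selected] = -np.inf          # never pick the same node twice
        selected.append(int(np.argmax(corr)))
        Hs = H[:, selected]
        # Re-fit output weights on the selected nodes by least squares.
        beta, *_ = np.linalg.lstsq(Hs, y, rcond=None)
        residual = y - Hs @ beta
    return selected, beta

selected, beta = select_nodes(H, y, k=8)

# Classify with the compact network that uses only the selected nodes.
pred = np.sign(H[:, selected] @ beta)
acc = (pred == y).mean()
```

The point of the sketch is the structural claim in the abstract: after sparse selection, only a small subset of hidden nodes (8 of 40 here) is retained, giving a compact network while preserving the feature mapping's discriminative power.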
Published in: International Journal of Machine Learning and Cybernetics