Abstract

Recently, two parsimonious algorithms were proposed to sparsify the extreme learning machine (ELM): the constructive parsimonious ELM (CP-ELM) and the destructive parsimonious ELM (DP-ELM). In this paper, the ideas behind CP-ELM and DP-ELM are extended to the regularized ELM (RELM), yielding CP-RELM and DP-RELM. Each of CP-RELM and DP-RELM can be realized in two schemes, namely CP-RELM-I and CP-RELM-II (respectively DP-RELM-I and DP-RELM-II). Generally, CP-RELM-II (DP-RELM-II) outperforms CP-RELM-I (DP-RELM-I) in terms of parsimony, and under nearly the same generalization performance it usually requires fewer hidden nodes than CP-ELM (DP-ELM). In addition, unlike CP-ELM and DP-ELM, CP-RELM and DP-RELM allow the number of candidate hidden nodes to exceed the number of training samples, which helps select better hidden nodes and build more compact networks. Finally, experiments on eleven benchmark data sets, divided into two groups, demonstrate the effectiveness of the proposed algorithms.
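To make concrete why regularization lets the candidate hidden-node pool exceed the number of training samples, the sketch below fits a minimal RELM in Python. This is an illustrative assumption, not the paper's CP/DP node-selection procedure: the function names, the tanh activation, and the ridge parameter lam are hypothetical choices, and only the standard ridge-regularized ELM solution is shown.

```python
import numpy as np

def relm_fit(X, y, n_hidden, lam=1e-2, rng=None):
    """Minimal RELM sketch: random fixed hidden layer, ridge-regularized
    least squares for the output weights (illustrative, not the paper's code)."""
    rng = np.random.default_rng(rng)
    n_features = X.shape[1]
    # Random, fixed input weights and biases (standard ELM construction).
    W = rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)  # hidden-layer output matrix, shape (n_samples, n_hidden)
    # Ridge solution: beta = (H^T H + lam * I)^{-1} H^T y.
    # The lam * I term keeps the linear system well-posed even when
    # n_hidden is larger than the number of training samples.
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def relm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Usage: 200 candidate hidden nodes for only 50 training samples,
# which plain (unregularized) ELM could not invert stably.
X = np.random.rand(50, 3)
y = np.sin(X.sum(axis=1))
W, b, beta = relm_fit(X, y, n_hidden=200, lam=1e-2, rng=0)
y_hat = relm_predict(X, W, b, beta)
```

Without the lam * I term, H^T H is singular whenever n_hidden exceeds n_samples, which is why the unregularized CP-ELM and DP-ELM cannot draw from such a large candidate pool.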

