Abstract

In the extreme learning machine (ELM), a large number of hidden nodes is typically required because the hidden layer is generated randomly. To improve network compactness, this paper studies the ELM with a smoothed l0 regularizer (ELM-SL0 for short). Firstly, the smoothed l0 regularization penalty term is introduced into the conventional error function, so that unimportant output weights are gradually forced toward zero. Secondly, the batch gradient method and the smoothed l0 regularizer are combined to train and prune the ELM. Furthermore, both the weak convergence and the strong convergence of ELM-SL0 are investigated. Compared with other existing ELMs, the proposed algorithm achieves better performance in terms of estimation accuracy and network sparsity.
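The sketch below illustrates the general idea under stated assumptions: a common smoothed-l0 surrogate, f_sigma(w) = 1 - exp(-w^2 / (2 sigma^2)), is added to the squared-error function, and the output weights are updated by batch gradient descent while the random hidden layer stays fixed. The surrogate, the tanh activation, and all hyperparameter names (sigma, lam, lr, n_hidden, epochs) are illustrative assumptions, not details taken from the paper itself.

```python
import numpy as np

def elm_sl0(X, T, n_hidden=50, lam=1e-3, sigma=0.1, lr=1e-3, epochs=1000):
    """Sketch of an ELM trained with a smoothed-l0 penalty (assumed form).

    X: (n_samples, n_features) inputs; T: (n_samples, n_outputs) targets.
    """
    rng = np.random.default_rng(0)
    n_features = X.shape[1]

    # Randomly generated, fixed hidden-layer parameters, as in a standard ELM.
    W_in = rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W_in + b)          # hidden-layer output matrix

    # Output weights, trained by batch gradient descent.
    beta = 0.01 * rng.standard_normal((n_hidden, T.shape[1]))
    for _ in range(epochs):
        err = H @ beta - T             # batch residual
        # Gradient of the smoothed-l0 surrogate f_sigma(w) = 1 - exp(-w^2/(2 sigma^2)):
        # f'_sigma(w) = (w / sigma^2) * exp(-w^2 / (2 sigma^2))
        grad_penalty = (beta / sigma**2) * np.exp(-beta**2 / (2 * sigma**2))
        beta -= lr * (H.T @ err + lam * grad_penalty)
    return W_in, b, beta
```

After training, output weights whose magnitudes fall below a small threshold can be set to zero and the corresponding hidden nodes removed, which is how the penalty yields a more compact network.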
