Abstract

The extreme learning machine (ELM) was proposed for single-hidden-layer feedforward networks (SLFNs). Because of its powerful modeling ability and the small amount of human intervention it requires, the ELM algorithm has been widely used in both regression and classification experiments. However, to achieve the required accuracy, it typically needs many more hidden nodes than conventional neural networks. This paper proposes a new, efficient learning algorithm for ELM with smoothing L0 regularization. The algorithm updates the weights along the direction in which the overall squared error decreases most, and it can therefore sparsify the network structure very efficiently. Numerical experiments show that the ELM algorithm with smoothing L0 regularization uses fewer hidden nodes yet achieves better generalization performance than the original ELM and the ELM with L1 regularization.
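As a rough sketch, an objective of this kind can be written as a squared-error term plus a smooth surrogate of the L0 norm of the output weights; the exact smoothing function used in the paper is not reproduced here, so the Gaussian-type surrogate below is an assumption:

    E(\beta) = \tfrac{1}{2}\,\lVert H\beta - T\rVert_2^2 + \lambda \sum_{j=1}^{L} \bigl(1 - e^{-\beta_j^2/\sigma^2}\bigr)

where H is the hidden-layer output matrix, \beta the output weights, T the targets, L the number of hidden nodes, \lambda the regularization coefficient, and \sigma a width parameter: as \sigma \to 0, each penalty term approaches an indicator of \beta_j \neq 0, so the sum approximates \lVert\beta\rVert_0 while remaining differentiable.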

Highlights

  • Research on and applications of artificial intelligence have surged to a new high

  • To substantiate the reliability of the proposed ELM algorithm with smoothing L0 regularization (ELMSL0), we conduct experiments on both regression and classification applications

  • The ELMSL0 algorithm shows better prediction performance than the other two algorithms


Summary

Introduction

Research on and applications of artificial intelligence have surged to a new high. Unlike conventional neural networks, ELM is a new type of SLFN in which the input weights and the thresholds of the hidden layer can be assigned arbitrarily, provided the activation function of the hidden layer is infinitely differentiable. In this paper, based on the regularization method, we propose a new efficient algorithm to train ELM. We show how smoothing L0 regularization is used to train ELM: it discriminates important weights from unnecessary ones and drives the unnecessary weights to zero, effectively simplifying the structure of the network. The ELM algorithm randomly generates the connection weights between the input layer and the hidden layer together with the thresholds of the hidden-layer neurons, and these need not be adjusted in the process of training the network.
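A minimal NumPy sketch of the training procedure just described, with the basic ELM step (random hidden parameters, least-squares output weights) followed by gradient updates on a smoothed L0 penalty; the function names, the Gaussian surrogate, and the hyperparameter values are illustrative assumptions, not the paper's implementation:

    import numpy as np

    def train_elm_sl0(X, T, n_hidden=50, lam=1e-3, sigma=0.1, lr=1e-2, steps=200, seed=0):
        """ELM with a smoothed-L0 penalty on the output weights (illustrative sketch)."""
        rng = np.random.default_rng(seed)
        # ELM step 1: input weights W and hidden thresholds b are generated randomly
        # and are never adjusted during training.
        W = rng.standard_normal((X.shape[1], n_hidden))
        b = rng.standard_normal(n_hidden)
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid hidden-layer output matrix

        # ELM step 2: initialize output weights by least squares (Moore-Penrose pseudoinverse).
        beta = np.linalg.pinv(H) @ T

        # Assumed extra step: gradient descent on squared error plus the smoothed L0 term,
        # which drives unnecessary output weights toward zero.
        for _ in range(steps):
            err_grad = H.T @ (H @ beta - T) / X.shape[0]
            l0_grad = lam * (2.0 * beta / sigma**2) * np.exp(-beta**2 / sigma**2)
            beta -= lr * (err_grad + l0_grad)
        return W, b, beta

    def predict_elm(X, W, b, beta):
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
        return H @ beta

After training, hidden nodes whose output weights have been driven close to zero can be pruned, which is how a sparser network structure would be obtained under this scheme.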

Description of Sparsity
Simulation Results
Conclusions
