Abstract

This paper proposes a novel artificial neural network called the sparse-Bayesian–based fast learning network (SBFLN). In SBFLN, sparse Bayesian regression is used to train the fast learning network (FLN), which is an improved extreme learning machine (ELM). The training process of SBFLN is to randomly generate the input weights and the hidden-layer biases, and then find the probability distribution of the remaining weights by the sparse Bayesian approach. SBFLN calculates the predicted output through a Bayes estimator, so it can provide a natural marginal probability for classification problems and can solve the overfitting problem caused by the least-squares estimation in FLN. In addition, the sparse Bayesian approach can automatically prune most redundant neurons in the hidden layer, which makes the network more compact and accurate. To verify the effectiveness of the proposed improvements, SBFLN is evaluated on 15 benchmark classification problems. The experimental results show that SBFLN is not sensitive to the number of hidden-layer neurons, and that its performance is competitive with or superior to several other state-of-the-art algorithms.
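To make the training procedure described above concrete, the following minimal sketch (Python/NumPy) fixes random input weights and hidden biases, builds the FLN design matrix from the hidden-layer outputs plus the direct input-to-output links, and then runs standard sparse Bayesian (ARD) evidence updates on the remaining weights, pruning those whose precisions diverge. All function names, the tanh activation, and the pruning threshold are illustrative assumptions, and the regression-form evidence updates stand in for the Laplace-approximated classification likelihood used in the paper.

```python
import numpy as np

def sbfln_fit(X, t, n_hidden=50, n_iter=100, prune_tol=1e6, seed=0):
    """Sketch of SBFLN training: random hidden layer + sparse Bayesian output weights."""
    rng = np.random.default_rng(seed)
    n, d = X.shape

    # Step 1 (as in ELM/FLN): randomly generate input weights and hidden biases.
    W_in = rng.uniform(-1.0, 1.0, (d, n_hidden))
    b = rng.uniform(-1.0, 1.0, n_hidden)
    H = np.tanh(X @ W_in + b)                          # hidden-layer outputs

    # FLN also has direct input-to-output links, so the design matrix
    # stacks hidden outputs, raw inputs, and a bias column.
    Phi = np.hstack([H, X, np.ones((n, 1))])
    keep = np.arange(Phi.shape[1])

    # Step 2: sparse Bayesian (ARD) estimation of the remaining weights.
    alpha = np.ones(Phi.shape[1])                      # per-weight precisions
    beta = 1.0                                         # noise precision
    for _ in range(n_iter):
        P = Phi[:, keep]
        Sigma = np.linalg.inv(np.diag(alpha[keep]) + beta * P.T @ P)
        mu = beta * Sigma @ P.T @ t                    # posterior mean (Bayes estimator)
        gamma = 1.0 - alpha[keep] * np.diag(Sigma)
        alpha[keep] = gamma / (mu ** 2 + 1e-12)        # evidence (type-II ML) update
        beta = (n - gamma.sum()) / (np.sum((t - P @ mu) ** 2) + 1e-12)
        keep = keep[alpha[keep] < prune_tol]           # prune weights driven to zero

    # Recompute the posterior on the surviving weights only.
    P = Phi[:, keep]
    Sigma = np.linalg.inv(np.diag(alpha[keep]) + beta * P.T @ P)
    mu = beta * Sigma @ P.T @ t
    return {"W_in": W_in, "b": b, "keep": keep, "mu": mu}

def sbfln_predict(model, X):
    H = np.tanh(X @ model["W_in"] + model["b"])
    Phi = np.hstack([H, X, np.ones((len(X), 1))])
    return Phi[:, model["keep"]] @ model["mu"]         # predictive mean from the posterior
```

Columns whose weights are pruned away correspond to hidden neurons (or direct links) that can be dropped entirely, which is what makes the final network more compact than the least-squares FLN.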

Highlights

  • Artificial neural networks (ANNs) have been widely used in industrial, financial, and natural fields due to their ability to obtain potential nonlinear mappings from data [1,2,3]

  • The sparse-Bayesian–based fast learning network (SBFLN) is compared with the fast learning network (FLN) [7], kernel extreme learning machine (KELM) [16], TROP-ELM [18], adaptive elastic ELM (AEELM) [17], the relevance vector machine (RVM) [20], and sparse Bayesian ELM (SBELM) [22] to verify the performance of the algorithm on various problems

  • The best results on all binary classification datasets belong to SBELM, SBFLN, or RVM, indicating that Bayesian methods are more effective on binary classification problems


Summary

Introduction

Artificial neural networks (ANNs) have been widely used in industrial, financial, and natural fields due to their ability to obtain potential nonlinear mappings from data [1,2,3]. In ELM, the input weights are randomly assigned and the output weights are calculated by least squares; this method overcomes the slow learning process of ANNs and the local minimum problem [6]. FLN has the nonlinear approximation capability of general ANNs while also retaining a linear mapping between input and output; this combination allows FLN to achieve better accuracy, generalization performance, and stability with the same number of hidden neurons [8]. Like the original ELM, FLN transforms network training into solving a linear least-squares problem and computes the output weights through the Moore-Penrose generalized inverse. Miche et al. [15] propose an optimally pruned ELM called OP-ELM, which introduces an L1-norm penalty into the training objective function and uses least angle regression to obtain sparse output weights.
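For reference, once the random hidden layer is fixed, the least-squares step mentioned above reduces to a single pseudoinverse solve. The snippet below is a minimal sketch of that baseline FLN/ELM step, which SBFLN replaces with sparse Bayesian regression; the tanh activation, the function name, and the assumption that T holds real-valued or one-hot targets are illustrative choices, not from the paper.

```python
import numpy as np

def fln_least_squares(X, T, W_in, b):
    """Baseline FLN/ELM step: fixed random hidden layer, output weights by least squares."""
    H = np.tanh(X @ W_in + b)                        # hidden-layer outputs for random W_in, b
    Phi = np.hstack([H, X, np.ones((len(X), 1))])    # FLN: hidden outputs + direct input links + bias
    return np.linalg.pinv(Phi) @ T                   # minimum-norm solution via the Moore-Penrose inverse
```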

Sparse Bayesian learning for FLN
Experiments and evaluation
Conclusions
