Abstract

Extreme learning machine is a fast learning algorithm for single hidden layer feedforward neural networks. However, an improperly chosen number of hidden neurons and the randomly assigned parameters can strongly degrade the performance of the extreme learning machine. To select a suitable number of hidden neurons, this paper proposes a novel hybrid learning algorithm based on a two-step process. First, the parameters of the hidden layer are adjusted by a self-organized learning algorithm. Next, the weight matrix of the output layer is determined using the Moore–Penrose inverse method. Nine classification datasets are considered to demonstrate the efficiency of the proposed approach compared with the original extreme learning machine, the Tikhonov regularization optimally pruned extreme learning machine, and backpropagation algorithms. The results show that the proposed method is fast and produces better accuracy and generalization performance.

Highlights

  • We propose a hybrid algorithm combining the self-organizing map algorithm with the extreme learning machine algorithm for optimizing single hidden layer feedforward neural network (SLFN) weights.

  • In extreme learning machine (ELM), the input weights of the hidden nodes are randomly chosen, and the output weights of the SLFN are computed by applying the pseudoinverse operation to the hidden layer output matrix. The illustration of a single hidden layer feedforward neural network is given in Figure 1. The numbers of neurons for the input, hidden, and output layers are n, Ñ, and m, respectively.

  • Self-organizing map (SOM) is used to reduce the dimension of the input weight matrix W of ELM from Ñ × n to ñ × n.
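The summary above does not detail how the SOM performs this reduction, so the following is only an illustrative sketch, not the authors' exact procedure: a minimal 1-D SOM in NumPy is trained on the rows of the Ñ × n input weight matrix W, and the learned codebook of ñ prototype vectors serves as the reduced ñ × n weight matrix. The function name `som_reduce` and all hyperparameters (iteration count, learning rate, neighborhood width) are assumptions for this example.

```python
import numpy as np

rng = np.random.default_rng(1)

def som_reduce(W, n_reduced, n_iter=500, lr0=0.5, sigma0=None):
    """Illustrative sketch: shrink an N x n weight matrix to n_reduced x n
    with a 1-D self-organizing map. Each row of W is a training sample;
    the learned codebook becomes the reduced weight matrix."""
    if sigma0 is None:
        sigma0 = n_reduced / 2.0
    codebook = rng.standard_normal((n_reduced, W.shape[1]))
    positions = np.arange(n_reduced)        # 1-D map topology
    for t in range(n_iter):
        x = W[rng.integers(len(W))]         # pick a random row of W
        # best matching unit: codebook vector closest to x
        bmu = np.argmin(((codebook - x) ** 2).sum(axis=1))
        frac = t / n_iter
        lr = lr0 * (1 - frac)               # decaying learning rate
        sigma = max(sigma0 * (1 - frac), 0.5)
        # Gaussian neighborhood around the best matching unit
        h = np.exp(-((positions - bmu) ** 2) / (2 * sigma ** 2))
        codebook += lr * h[:, None] * (x - codebook)
    return codebook

W = rng.standard_normal((100, 8))     # e.g. original weights, N tilde = 100, n = 8
W_red = som_reduce(W, n_reduced=20)   # reduced to n tilde = 20 prototypes
```

After training, each of the ñ codebook rows summarizes a cluster of the original random hidden-node weight vectors, which is one plausible way to obtain a smaller hidden layer.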


Summary

Basic ELM Algorithm

An efficient learning algorithm for single hidden layer feedforward neural networks (SLFNs), called extreme learning machine (ELM), was proposed by Huang et al. [1]. In this formulation, βi = [βi1, βi2, ..., βim]T is the weight vector connecting the ith hidden node to the output nodes, bi is the threshold of the ith hidden node, yj = [yj1, yj2, ..., yjm]T ∈ Rm is the output vector of the neural network, and f(·) denotes an activation function, in general the sigmoid f(x) = 1/(1 + e−x). In matrix form the network equations read Hβ = Y, where H is the output matrix of the hidden layer, H = H(w1, ..., wÑ, b1, ..., bÑ, x1, ..., xN). The criterion function to be minimized is the sum of the squared errors over all the training samples. ELM randomly assigns the input weights and thresholds, calculates the hidden layer output matrix H using equation (4), and then determines the output weights by the Moore–Penrose pseudoinverse of H.

Proposed Learning Algorithm
Stage 1
Stage 2
Simulation Results

