Abstract

Extreme learning machine (ELM) is a feedforward neural network with a single hidden layer, similar in structure to a multilayer perceptron (MLP). To avoid the costly iterative training that the traditional backpropagation algorithm requires for an MLP, the weights between the input and hidden layers of an ELM are assigned randomly and left untrained. The output layer of the ELM is linear, as in a radial basis function neural network (RBFNN), so the output weights can be estimated with a least-squares solution. Our previous work demonstrated that the computational cost of ELM is much lower than that of the standard support vector machine (SVM), and that a kernel version of ELM can offer performance comparable to SVM. In that work we also investigated the impact of the number of hidden neurons on ELM performance: more hidden neurons are needed when the number of training samples and the data dimensionality are large, which results in a very large matrix inversion problem. To avoid handling such a large matrix, we propose conducting band selection to reduce data dimensionality (i.e., the number of input neurons), thereby reducing network complexity. Experimental results show that ELM using the selected bands can yield classification accuracy similar to, or even better than, that obtained using all the original bands.
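For readers unfamiliar with the training procedure the abstract summarizes, the following is a minimal NumPy sketch of a basic ELM: random, untrained input-to-hidden weights and a least-squares solution for the linear output layer. The function names, the sigmoid activation, and the uniform weight initialization are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def train_elm(X, T, n_hidden, seed=None):
    """Minimal ELM sketch (assumed form, not the authors' implementation).

    X : (n_samples, n_features) training data; in the hyperspectral setting,
        n_features is the number of (selected) bands, i.e., input neurons.
    T : (n_samples, n_classes) one-hot class targets.
    """
    rng = np.random.default_rng(seed)
    # Input-to-hidden weights and biases are drawn randomly and never trained.
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # sigmoid hidden-layer outputs
    # Linear output layer: least-squares output weights via the Moore-Penrose
    # pseudoinverse. When n_hidden is large, this step becomes the large
    # matrix inversion problem the abstract refers to.
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def predict_elm(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return (H @ beta).argmax(axis=1)  # predicted class per sample

# Toy usage with hypothetical data: 200 samples, 50 bands, 3 classes.
X = np.random.default_rng(0).normal(size=(200, 50))
T = np.eye(3)[np.random.default_rng(1).integers(0, 3, size=200)]
W, b, beta = train_elm(X, T, n_hidden=100, seed=2)
labels = predict_elm(X, W, b, beta)
```

Under band selection as proposed in the abstract, X would contain only the selected band subset, shrinking W and hence the size of the least-squares problem.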
