Abstract
Extreme learning machine (ELM) has recently attracted many researchers’ interest due to its very fast learning speed and ease of implementation. It has achieved good results in many applications, such as regression and binary and multiclass classification. However, when some attributes of a dataset are missing, its fixed network structure performs unsatisfactorily. This article proposes the Scalable Real-Time Attributes Responsive Extreme Learning Machine (Star-ELM), which grows an appropriate structure through autonomous coevolution of its nodes, adapting to the given dataset. Its hidden nodes can be merged to adjust the structure and weights more effectively. In experiments on classical datasets, compared with other relevant ELM variants, Star-ELM achieves better classification performance under loss of dataset attributes in some situations.
Highlights
Extreme learning machine (ELM) [1] is derived from the single-hidden-layer feed-forward neural network (SLFN) design proposed by Huang et al. In this algorithm, the input weights and biases of the hidden layer are randomly generated and need not be adjusted
In order to improve learning effectiveness when dataset attributes are missing, this paper proposes a network named the Scalable Real-Time Attributes Responsive Extreme Learning Machine (Star-ELM)
The classification results of ELM, S-ELMs, and Star-ELM on the three datasets are listed in the corresponding tables
Summary
Extreme learning machine (ELM) [1] is derived from the single-hidden-layer feed-forward neural network (SLFN) design proposed by Huang et al. In this algorithm, the input weights and biases of the hidden layer are randomly generated and need not be adjusted. To improve the generalization ability of neural networks, pruned extreme learning machine (P-ELM) [7] and optimally pruned extreme learning machine (OP-ELM) [8] apply pruning methods to the model and obtain satisfactory results, while incremental extreme learning machine (I-ELM) [9], error-minimized extreme learning machine (EM-ELM) [10], and constructive hidden-nodes selection for ELM (CS-ELM) [11] explore incremental constructive feed-forward networks with random hidden nodes to minimize their error. Both kinds of improvement are based on single-hidden-layer networks
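The ELM scheme described above can be sketched in a few lines of NumPy: the input weights and hidden biases are drawn randomly and never trained, and only the output weights are solved in closed form via the Moore–Penrose pseudoinverse of the hidden-layer output matrix. This is a minimal illustrative sketch, not the paper's Star-ELM; the function names, the `tanh` activation, and the toy XOR task are all assumptions chosen for illustration.

```python
import numpy as np

def elm_train(X, T, n_hidden=16, seed=0):
    """Minimal ELM sketch: random hidden layer, least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (never adjusted)
    b = rng.standard_normal(n_hidden)                # random hidden biases (never adjusted)
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                     # output weights via Moore-Penrose pseudoinverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: learn XOR posed as a 2-class one-hot classification problem
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[1., 0.], [0., 1.], [0., 1.], [1., 0.]])  # one-hot targets
W, b, beta = elm_train(X, T, n_hidden=16)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
```

Because only `beta` is fit (a single linear least-squares solve), training is extremely fast compared with gradient-based SLFN training, which is the speed advantage the abstract refers to.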
Published in: International Journal of Computational Intelligence Systems