Abstract

Extreme Learning Machine (ELM) is a model for training single-hidden-layer feedforward networks and is used extensively in classification and regression problems because of its good results and short training time compared with backpropagation. Previous work has shown that setting the input weights and hidden-layer biases randomly can reduce the accuracy of the resulting ELM. This paper presents a comparative study of three harmony search variants (the original Harmony Search, Global-best Harmony Search, and New Global Harmony Search), a state-of-the-art memetic algorithm called M-ELM, and a random walk algorithm called RW-ELM on 20 classical classification datasets from the UCI repository. The results show that Harmony Search is the best algorithm for training ELMs and that the other two harmony search variants outperform M-ELM and RW-ELM when cross-validation is used. The experiments were first performed with separate training and test files and then with 5-fold cross-validation. Based on the literature, the authors recommend the use of cross-validation because it yields more realistic accuracy estimates.
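To make the approach concrete, below is a minimal sketch in Python with NumPy of the idea summarized above: an ELM whose output weights are computed with the Moore-Penrose pseudoinverse, plus a basic Harmony Search loop that tunes the flattened input weights and hidden biases against validation accuracy. The sigmoid activation, the parameter names (hms, hmcr, par, bw), the search range [-1, 1], and the validation-accuracy fitness are illustrative assumptions, not the paper's implementation; the variants compared in the study (Global-best Harmony Search, New Global Harmony Search, M-ELM, RW-ELM) differ mainly in how the new candidate solution is generated.

import numpy as np

def elm_fit(X, T, W, b):
    # Hidden-layer output H = sigmoid(X W + b); output weights beta = pinv(H) T.
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return np.linalg.pinv(H) @ T

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return np.argmax(H @ beta, axis=1)

def harmony_search(X_tr, y_tr, X_val, y_val, d, L, n_classes,
                   hms=10, hmcr=0.9, par=0.3, bw=0.05, iters=200, seed=0):
    # Searches over the flattened input weights (d*L) and biases (L) of the ELM.
    rng = np.random.default_rng(seed)
    dim = d * L + L
    T_tr = np.eye(n_classes)[y_tr]          # one-hot targets for training

    def fitness(v):
        W = v[:d * L].reshape(d, L)
        b = v[d * L:]
        beta = elm_fit(X_tr, T_tr, W, b)
        return np.mean(elm_predict(X_val, W, b, beta) == y_val)

    # Initialise the harmony memory with random candidates in [-1, 1].
    hm = rng.uniform(-1, 1, size=(hms, dim))
    fit = np.array([fitness(v) for v in hm])

    for _ in range(iters):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:          # memory consideration
                new[j] = hm[rng.integers(hms), j]
                if rng.random() < par:       # pitch adjustment
                    new[j] += bw * rng.uniform(-1, 1)
            else:                            # random selection
                new[j] = rng.uniform(-1, 1)
        f = fitness(new)
        worst = np.argmin(fit)
        if f > fit[worst]:                   # replace the worst harmony
            hm[worst], fit[worst] = new, f

    best = np.argmax(fit)
    return hm[best, :d * L].reshape(d, L), hm[best, d * L:]

# Hypothetical usage with synthetic data (a placeholder, not one of the UCI datasets):
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
W, b = harmony_search(X[:150], y[:150], X[150:], y[150:], d=4, L=20, n_classes=2)
beta = elm_fit(X[:150], np.eye(2)[y[:150]], W, b)
print("held-out accuracy:", np.mean(elm_predict(X[150:], W, b, beta) == y[150:]))

In the paper's second experimental setting, this evaluation would be wrapped in 5-fold cross-validation rather than a single train/validation split.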
