Abstract

The Extreme Learning Machine (ELM) is a single-hidden-layer feedforward neural network with very fast learning speed that has attracted significant research attention in recent years. The salient feature of ELM is that its input parameters can be randomly generated instead of exhaustively tuned, saving a great deal of computational expense. However, the architecture of ELM has a strong impact on its generalization performance and is traditionally determined by trial and error. Selecting an appropriate ELM architecture therefore becomes a crucial problem in the successful application of ELM. In this paper, we propose the Robust Incremental ELM (RI-ELM), a constructive method in which hidden nodes are added one by one. RI-ELM is robust because the architecture is selected according to the Leave-One-Out (LOO) cross-validation criterion, which is nearly unbiased and reliable but notoriously slow to compute. To tackle this speed issue, we derive an efficient formula that incrementally updates the LOO error as each new hidden node is recruited, so that RI-ELM retains the speed advantage of ELM while achieving good and robust performance. Furthermore, RI-ELM requires almost no user intervention, since the architecture is determined automatically.
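
To make the selection criterion concrete, the sketch below shows an ELM whose hidden nodes are added one at a time and whose size is chosen by the LOO error obtained from the PRESS statistic. It is not the authors' RI-ELM: the efficient incremental LOO update derived in the paper is not reproduced here, and the PRESS error is simply recomputed at every step, which is slower but illustrates the criterion; all function and parameter names (e.g. `elm_loo_select`, `max_nodes`) are illustrative assumptions.

```python
# Minimal sketch (not the authors' RI-ELM): an ELM grown one hidden node at a
# time, with the architecture chosen by the Leave-One-Out (PRESS) error.
# The paper's incremental LOO update formula is NOT implemented; the PRESS
# statistic is recomputed from scratch at each step for clarity.
import numpy as np

def press_loo_error(H, y):
    """LOO residuals via the PRESS statistic: e_i = r_i / (1 - hat_ii)."""
    H_pinv = np.linalg.pinv(H)                 # pseudoinverse for stability
    beta = H_pinv @ y                          # least-squares output weights
    residuals = y - H @ beta
    hat_diag = np.einsum("ij,ji->i", H, H_pinv)  # diagonal of the hat matrix
    loo_residuals = residuals / (1.0 - hat_diag)
    return np.mean(loo_residuals ** 2), beta

def elm_loo_select(X, y, max_nodes=50, seed=None):
    """Add hidden nodes one by one; keep the size with the smallest LOO error."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.standard_normal((d, max_nodes))    # random input weights (untuned)
    b = rng.standard_normal(max_nodes)         # random biases
    H_full = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # sigmoid hidden-layer outputs

    best = (np.inf, None, None)                # (loo_error, n_nodes, beta)
    for L in range(1, max_nodes + 1):
        loo_err, beta = press_loo_error(H_full[:, :L], y)
        if loo_err < best[0]:
            best = (loo_err, L, beta)
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 1))
    y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(200)
    loo_err, n_nodes, beta = elm_loo_select(X, y, max_nodes=40, seed=1)
    print(f"selected {n_nodes} hidden nodes, LOO MSE = {loo_err:.4f}")
```

In this toy setting the hidden-node count is picked automatically by the LOO criterion, mirroring the near-zero user intervention claimed for RI-ELM; the paper's contribution is to update that LOO error incrementally rather than recomputing it for every candidate size.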
