Abstract

The Extreme Learning Machine (ELM) is a learning scheme for single-hidden-layer feedforward neural networks that has attracted considerable research attention over the past decade because of its extremely fast learning speed. One popular variant is the Online Sequential ELM (OS-ELM), which handles sequential learning tasks. However, the OS-ELM has several limitations: it requires an initialization phase, depends on important pre-defined parameters, can run into singularity problems, and its performance can be inconsistent and unreliable. In this paper, an Online Sequential Regularized ELM (OS-RELM) is proposed to address these issues. The main idea is to incorporate regularization to further improve generalization performance, together with a new update formula that eliminates the initialization phase. To enable the OS-RELM to adapt to new data in an effective and reliable manner, an efficient Leave-One-Out Cross-Validation method is implemented. Finally, a matrix reconstruction method is employed to address the unstable-update issue. Unlike some ELM variants that greatly compromise the speed advantage of the ELM, the proposed scheme is designed to keep the additional computational load small. Simulation results on benchmark problems show that the OS-RELM is a reliable and efficient algorithm with generalization performance superior to that of the OS-ELM.
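To make the two core ideas above concrete, here is a minimal sketch (not the paper's implementation; all names, the hidden-layer size, the ridge parameter `lam`, and the toy data are assumptions) of a regularized ELM: the hidden layer is random and frozen, the output weights solve a ridge regression, and the same solution can be reached sequentially with a recursive least-squares update whose covariance matrix starts at `I/lam`, so no initial data batch is needed.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_elm(n_features, n_hidden=30):
    """Random hidden layer: input weights and biases are drawn once and never trained."""
    W = rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden)
    return W, b

def hidden(X, W, b):
    """Hidden-layer output matrix H."""
    return np.tanh(X @ W + b)

def fit_batch(H, y, lam=1e-2):
    """Batch regularized solution: beta = (H^T H + lam*I)^-1 H^T y."""
    n_hidden = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)

class SequentialRidge:
    """Recursive least-squares form of the same ridge problem.

    Because of the regularization term, P can be initialized as I/lam
    before any data arrive, which is what removes the need for an
    initialization batch in this sketch.
    """
    def __init__(self, n_hidden, lam=1e-2):
        self.P = np.eye(n_hidden) / lam
        self.beta = np.zeros(n_hidden)

    def update(self, h, y):
        Ph = self.P @ h
        k = Ph / (1.0 + h @ Ph)                 # gain vector (Sherman-Morrison)
        self.beta = self.beta + k * (y - h @ self.beta)
        self.P = self.P - np.outer(k, Ph)

# Toy regression problem: y = sin(x).
X = np.linspace(-3, 3, 400).reshape(-1, 1)
y = np.sin(X).ravel()

W, b = init_elm(n_features=1)
H = hidden(X, W, b)

beta_batch = fit_batch(H, y)

seq = SequentialRidge(n_hidden=30)
for h_i, y_i in zip(H, y):
    seq.update(h_i, y_i)

# With P0 = I/lam and beta0 = 0, the sequential estimate matches the
# batch ridge solution up to floating-point error.
max_diff = np.max(np.abs(beta_batch - seq.beta))
```

The batch and sequential paths coincide because the RLS recursion maintains exactly `P = (lam*I + H^T H)^-1`; the paper's actual update formula and stability safeguards (e.g. the matrix reconstruction step) are beyond this sketch.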
