Abstract

Extreme learning machine (ELM) has been intensively studied during the last decade due to its high efficiency, effectiveness, and ease of implementation. Recently, many variants, such as parallel ELM (P-ELM), incremental ELM, and online sequential ELM (OS-ELM), have been proposed to improve its training efficiency and to enable incremental learning. In this paper, we propose two parallel variants, termed data parallel regularized ELM (DPR-ELM) and model parallel regularized ELM (MPR-ELM), to further improve the computational efficiency of ELM on large-scale learning tasks. Collectively, these two variants are called parallel regularized ELM (PR-ELM). The proposed algorithms are implemented on a cluster in a Message Passing Interface (MPI) environment. The advantages of the proposed PR-ELM algorithms over existing variants are as follows: (1) They have better parallelism, since each data block or sub-model is trained independently. (2) They dramatically reduce the runtime memory requirement, since the whole dataset or the whole model is split into small chunks or sub-models. (3) Both DPR-ELM and MPR-ELM have better scalability, since they can be deployed on clusters with many more computing nodes. Extensive experiments have been conducted to validate the effectiveness of the proposed algorithms. DPR-ELM and MPR-ELM achieve 5.15× and 3.5× speedup, respectively, on a cluster with six nodes. Moreover, the speedup of DPR-ELM rises to 5.85× as the dataset size increases, and that of MPR-ELM rises to 4× as the number of hidden nodes increases.
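
To illustrate the data-parallel idea described above, the following is a minimal sketch of a data-parallel regularized ELM step using mpi4py and NumPy. It is not the authors' implementation; the activation function, the parameter names (hidden_nodes, C), and the placeholder data shapes are assumptions. The key point it shows is that each node only summarizes its own chunk with two small matrices, which are then summed across nodes, so the full dataset never has to fit in one node's memory.

```python
# Hypothetical sketch of a data-parallel regularized ELM step (DPR-ELM style).
# Each rank holds one chunk (X_p, T_p) of the training set.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

hidden_nodes = 200     # L: number of hidden nodes (assumed value)
C = 1.0                # regularization parameter (assumed value)

# This rank's chunk of the data (placeholder shapes for illustration).
X_p = np.random.rand(1000, 10)   # local inputs,  n_p x d
T_p = np.random.rand(1000, 1)    # local targets, n_p x m

# All ranks must use the same random hidden-layer parameters,
# so rank 0 draws them once and broadcasts.
if rank == 0:
    W = np.random.randn(X_p.shape[1], hidden_nodes)
    b = np.random.randn(hidden_nodes)
else:
    W, b = None, None
W = comm.bcast(W, root=0)
b = comm.bcast(b, root=0)

# Local hidden-layer output and the two small matrices summarizing the chunk.
H_p = 1.0 / (1.0 + np.exp(-(X_p @ W + b)))   # sigmoid activation (assumed)
HtH_p = H_p.T @ H_p                          # L x L
HtT_p = H_p.T @ T_p                          # L x m

# Sum the per-chunk matrices across all nodes; only L x L and L x m data move.
HtH = np.empty_like(HtH_p)
HtT = np.empty_like(HtT_p)
comm.Allreduce(HtH_p, HtH, op=MPI.SUM)
comm.Allreduce(HtT_p, HtT, op=MPI.SUM)

# Every rank can now solve the small regularized system for the output weights.
beta = np.linalg.solve(np.eye(hidden_nodes) / C + HtH, HtT)
```

Run with, e.g., `mpiexec -n 6 python dpr_elm_sketch.py`; because only the L×L and L×m summary matrices are reduced, communication cost is independent of the number of training samples per node.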
