Abstract

Focussing on the problem that redundant nodes in the kernel incremental extreme learning machine (KI-ELM) lead to ineffective iteration growth and reduced learning efficiency, a novel improved hybrid intelligent deep kernel incremental extreme learning machine (HI-DKIELM), based on hybrid intelligent algorithms and the kernel incremental extreme learning machine, is proposed. First, hybrid intelligent algorithms based on differential evolution (DE) and multiple-population grey wolf optimization (MPGWO) are proposed to optimize the hidden-layer neuron parameters and then determine the number of effective hidden-layer neurons. The learning efficiency of the algorithm is improved by reducing the network complexity. Then, a deep network structure is introduced into the kernel incremental extreme learning machine to extract features from the original input data layer by layer. The experimental results show that the HI-DKIELM method proposed in this paper has a more compact network structure, higher prediction accuracy, and better generalization ability than other ELM methods.

Highlights

  • The artificial neural network analyses data by abstractly simulating the biological neural network, thereby realizing functions such as data classification, system identification, function approximation and numerical estimation

  • Hybrid intelligence (HI)-DKIELM consists of a deep learning network cascaded with a kernel incremental extreme learning machine: the input data pass through the deep learning network, which extracts richer information and boosts separability by mapping them to a higher-dimensional space, while the ELM network provides a superior classification surface

  • The HI-DKIELM proposed in this paper combines the advantages of the deep learning network and the KIELM network, and can effectively improve performance
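The cascade described in the highlights can be sketched minimally. The paper's exact layer widths, kernel, and hyperparameters are not given here, so the two untrained random layers (standing in for the deep feature extractor), the RBF kernel, and the constants `gamma` and `C` below are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_layer(X, n_out, rng):
    """One randomly initialised feed-forward layer (its weights are never trained)."""
    W = rng.normal(size=(X.shape[1], n_out))
    b = rng.normal(size=(1, n_out))
    return np.tanh(X @ W + b)

# Toy binary classification data
X = rng.normal(size=(100, 4))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)

# Deep part: extract features from the input layer by layer
H = X
for width in (32, 32):          # assumed widths
    H = random_layer(H, width, rng)

# Kernel ELM part: RBF-kernel ridge regression on the deep features
gamma, C = 0.5, 1e2             # assumed kernel width and regularization
sq = np.sum(H ** 2, axis=1)
K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * H @ H.T))
alpha = np.linalg.solve(K + np.eye(len(K)) / C, y)

train_pred = (K @ alpha > 0.5).astype(float)
acc = np.mean(train_pred == y)  # training accuracy of the cascade
```

The design point the sketch illustrates is the division of labour: the deep layers only transform the data, while all learning is concentrated in the closed-form kernel regression at the output.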



Introduction

The artificial neural network analyses data by abstractly simulating the biological neural network, thereby realizing functions such as data classification, system identification, function approximation and numerical estimation. The training efficiency and learning ability of traditional single-hidden-layer feed-forward neural networks (SLFNs) are still too low. The extreme learning machine (ELM) addresses this by assigning the hidden-layer parameters randomly: the only free parameters to be learned are the weights connecting the hidden layer to the output layer, and these output weights are obtained from the generalized inverse of the hidden-layer output matrix using the regularized least-squares method. In this manner, ELM can achieve good universal approximation capability as well as high running efficiency, owing to its excellent network learning performance and simple network structure, thereby avoiding the local-minimum and slow-convergence problems of gradient-based training.
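The ELM training step described above (a fixed random hidden layer, with only the output weights solved in closed form) can be sketched as follows. The tanh activation, hidden-layer size, and regularization constant `C` are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data
X = rng.normal(size=(200, 5))
T = np.sin(X.sum(axis=1, keepdims=True))

# Randomly initialised hidden layer; these weights stay fixed
n_hidden = 50
W = rng.normal(size=(5, n_hidden))
b = rng.normal(size=(1, n_hidden))
H = np.tanh(X @ W + b)          # hidden-layer output matrix

# Regularized least-squares solution for the output weights:
#   beta = (H^T H + I / C)^{-1} H^T T
C = 1e3
beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ T)

pred = H @ beta
mse = np.mean((pred - T) ** 2)  # training error of the fitted network
```

Because training reduces to one linear solve rather than iterative gradient descent, there is no local minimum to fall into and no convergence schedule to tune, which is the efficiency argument made in the text.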

