Abstract

For regression tasks, the existing extreme learning machine (ELM) and kernel extreme learning machine (KELM) algorithms exhibit singularity and over-fitting problems when the number of training samples is smaller than the number of hidden-layer neurons. To overcome these shortcomings, this paper introduces a random reduced kernel and a regularization parameter and proposes the regularization incremental extreme learning machine with random reduced kernel (RKRIELM) algorithm. RKRIELM combines the kernel function with the incremental extreme learning machine (I-ELM) to avoid the randomness of the hidden-layer mapping, thereby solving the singularity problem that arises when the number of initial training samples of the ELM is smaller than the number of hidden-layer neurons. Moreover, it uses the number of hidden-layer neurons as the termination condition of the training loop. Additionally, the regularization parameter reduces the risk of over-fitting. Regression experiments on standard data sets compared the proposed method with ELM, KELM, the reduced kernel extreme learning machine, the rotation forest selective ensemble extreme learning machine, reduced support vector regression, and gray wolf optimization support vector regression. The results indicate that the proposed method achieves a lower prediction error and better training efficiency than the other algorithms in most cases.
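The abstract does not reproduce the full RKRIELM algorithm, but its core ingredients, a randomly reduced kernel (a random subset of training samples used as kernel centres) combined with ridge-style regularization so the system matrix stays nonsingular, can be illustrated with a minimal sketch. This is not the authors' implementation: the RBF kernel choice, the function names, and parameters such as n_centres, lam, and gamma are assumptions made for the example.

```python
import numpy as np

def rbf_kernel(X, C, gamma=1.0):
    """RBF kernel matrix between sample rows X (n x d) and centre rows C (m x d)."""
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def reduced_kernel_ridge(X, y, n_centres=50, lam=1e-3, gamma=1.0, seed=None):
    """Sketch of a regularized reduced-kernel regressor (not the paper's exact method):
    randomly select a subset of training samples as kernel centres, then solve the
    ridge-regularized least-squares problem
        beta = (K^T K + lam * I)^{-1} K^T y.
    The lam * I term keeps the system nonsingular even when there are fewer
    samples than centres, which mirrors the singularity issue described above.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=min(n_centres, len(X)), replace=False)
    centres = X[idx]
    K = rbf_kernel(X, centres, gamma)            # n_samples x n_centres
    A = K.T @ K + lam * np.eye(len(centres))     # regularization guarantees invertibility
    beta = np.linalg.solve(A, K.T @ y)
    return centres, beta

def predict(X_new, centres, beta, gamma=1.0):
    """Evaluate the fitted reduced-kernel model on new samples."""
    return rbf_kernel(X_new, centres, gamma) @ beta

if __name__ == "__main__":
    # Toy regression problem: noisy sine curve.
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)
    centres, beta = reduced_kernel_ridge(X, y, n_centres=30, lam=1e-3, seed=0)
    print(predict(X[:5], centres, beta))
```

In the incremental variant the abstract describes, centres would be added one at a time and the loop would terminate once the prescribed number of hidden-layer neurons is reached; the fixed n_centres parameter above plays that role in this simplified sketch.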
