Abstract

In the extreme learning machine (ELM) framework, the configuration of the hidden layer determines the network's generalization ability, and in the presence of outliers in the training set, the least-squares estimate of the weights between the hidden layer and the output layer can be badly distorted. To address these two problems in ELM implementation, we extend the robust penalized statistical framework to ELM and propose a general robust penalized ELM for regression, consisting of two components (a robust loss function and a regularization term), which improves the efficiency of ELM training with a more compact network structure and yields more accurate predictions. We investigate six loss functions ($l_1$-norm loss, $l_2$-norm loss, Huber loss, Bisquare loss, exponential squared loss, and Lncosh loss) and two regularization strategies (lasso penalty and ridge penalty). Furthermore, we present two training procedures for the robust penalized ELM based on the iteratively reweighted least squares (IRLS) method, with hyper-parameters selected by cross-validation, for the lasso penalty and the ridge penalty, respectively. Finally, the proposed robust penalized ELM is applied to an ultra-short-term wind speed forecasting study, where the multi-step forecasting performance confirms that the framework produces more effective predictions in this application.
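To make the training procedure concrete, below is a minimal sketch (not the authors' reference implementation) of one instance of this framework: a robust ELM regressor with a ridge penalty and the Huber loss, fitted by IRLS. The hyper-parameters `n_hidden`, the ridge parameter `lam`, and the Huber threshold `delta` are illustrative assumptions; in the paper's setting they would be chosen by cross-validation.

```python
import numpy as np

def robust_ridge_elm(X, y, n_hidden=50, lam=1e-2, delta=1.345,
                     n_iter=30, tol=1e-6, seed=None):
    """Fit an ELM with Huber loss + ridge penalty via IRLS (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    # Random input weights and biases, fixed as in standard ELM.
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                       # hidden-layer output matrix

    # Ordinary ridge solution as the IRLS starting point.
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)

    for _ in range(n_iter):
        r = y - H @ beta                         # residuals
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # robust scale (MAD)
        u = np.abs(r / s)
        # Huber IRLS weights: 1 inside the threshold, delta/|u| outside,
        # so large residuals (outliers) are down-weighted.
        w = np.where(u <= delta, 1.0, delta / u)
        HW = H * w[:, None]
        beta_new = np.linalg.solve(HW.T @ H + lam * np.eye(n_hidden), HW.T @ y)
        if np.linalg.norm(beta_new - beta) < tol:
            beta = beta_new
            break
        beta = beta_new
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Swapping in another robust loss (Bisquare, exponential squared, Lncosh) only changes the weight function `w`, and replacing the ridge penalty with a lasso penalty would replace the closed-form weighted ridge step with a weighted lasso solver.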
