Abstract

The online sequential extreme learning machine with persistent regularization and forgetting factor (OSELM-PRFF) can avoid the potential singularity or ill-posedness problems of online sequential regularized extreme learning machines with forgetting factors (FR-OSELM), and is particularly suitable for modelling in non-stationary environments. However, existing algorithms for OSELM-PRFF are time-consuming or unstable under certain paradigms or parameter setups. This paper presents a novel algorithm for OSELM-PRFF, named Cholesky-factorization-based OSELM-PRFF (CF-OSELM-PRFF), which recurrently constructs a linear equation for the extreme learning machine and efficiently solves it via Cholesky factorization in every cycle. CF-OSELM-PRFF handles the timeliness of samples through the forgetting factor, while the regularization term in its cost function acts persistently. CF-OSELM-PRFF can learn data one-by-one or chunk-by-chunk, with a fixed or varying chunk size. Detailed performance comparisons between CF-OSELM-PRFF and related approaches are carried out on several regression problems. The numerical simulation results show that CF-OSELM-PRFF achieves higher computational efficiency than its counterparts and yields stable predictions.
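The paper's update formulas are not reproduced on this page, but the description above suggests a discounted normal equation whose regularization term is held constant and which is re-solved by Cholesky factorization at every cycle. The following Python sketch illustrates that reading under those assumptions; the function name, the exact recurrence, and the parameter values are illustrative, not the authors' published equations.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def cf_oselm_prff_step(A, b, H_k, T_k, lam=0.98, c=1e-3):
    """One hypothetical CF-OSELM-PRFF update for a new chunk (H_k, T_k).

    A, b : running normal-equation matrix and right-hand side
    H_k  : hidden-layer output matrix for the new chunk (n_k x L)
    T_k  : targets for the new chunk (n_k x m)
    lam  : forgetting factor in (0, 1] that discounts old samples
    c    : regularization coefficient, kept persistent (not decayed)
    """
    L = A.shape[0]
    # Discount past information, add the new chunk, and top the
    # regularization back up so it stays at c*I instead of decaying
    # geometrically with the forgetting factor.
    A = lam * A + H_k.T @ H_k + (1.0 - lam) * c * np.eye(L)
    b = lam * b + H_k.T @ T_k
    # A is symmetric positive definite, so A @ beta = b can be solved
    # cheaply via Cholesky factorization rather than forming an inverse.
    beta = cho_solve(cho_factor(A), b)
    return A, b, beta
```

With the initialization A = c * np.eye(L) and b = np.zeros((L, m)), this recurrence keeps the regularization term fixed at c*I after every step, which is one way to read "the regularization term ... acts persistently"; the chunk size n_k may also differ from call to call, matching the one-by-one or chunk-by-chunk learning modes described above.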

Highlights

  • The performance of the presented CF-OSELM-PRFF is verified on a time-varying nonlinear process identification task, two chaotic time series predictions, and one electricity demand prediction

  • The simulations compare the computational complexity and accuracy of CF-OSELM-PRFF with those of FP-ELM, FGR-OSELM, and AFGR-OSELM


Introduction

Single hidden-layer feedforward neural networks (SLFN) can approximate any function and form decision boundaries with arbitrary shapes if the activation function is chosen properly [1,2,3]. To train SLFN quickly, Huang et al. proposed a learning algorithm called the "Extreme Learning Machine" (ELM), which randomly assigns the hidden node parameters and determines the output weights by the Moore–Penrose generalized inverse [4,5,6]. ELM has been extended to multilayer ELMs, which play an important role in the deep learning domain [17,18,19,20,21,22,23]. The original ELM is a batch learning algorithm: all samples must be available before ELM trains the SLFN, so ELM has to gather old and new data together and retrain from scratch whenever new samples arrive.
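As a concrete reference point, here is a minimal batch-ELM sketch in Python; the tanh activation, hidden-layer size, and function name are assumptions for illustration rather than code from the cited papers.

```python
import numpy as np

def elm_train(X, T, L=50, seed=0):
    """Batch ELM: random hidden layer, least-squares output weights.

    X : inputs (n x d), T : targets (n x m), L : number of hidden nodes.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], L))  # random input weights (never trained)
    bias = rng.standard_normal(L)             # random hidden biases
    H = np.tanh(X @ W + bias)                 # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T              # Moore-Penrose solution of H @ beta = T
    return W, bias, beta
```

Rerunning this on the union of old and new data every time a sample arrives is exactly the cost that online sequential variants such as OSELM-PRFF are designed to avoid.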
