Abstract
The online sequential extreme learning machine (OS-ELM) is an online, incremental learning algorithm that can learn data one by one or chunk by chunk with a fixed or varying chunk size, while achieving the same learning performance as an ELM trained on the complete data set. However, in online learning environments the concepts to be learned may change over time, a phenomenon referred to as concept drift. To apply ELMs in such non-stationary environments, this paper proposes a forgetting parameters extreme learning machine (FP-ELM). Like OS-ELM, FP-ELM supports incremental, online learning. In addition, after each new chunk arrives, FP-ELM assigns a forgetting parameter to the previous training data according to the current performance, so that the model can adapt to possible changes. A regularized optimization method is used to avoid estimator windup. FP-ELM is compared with two widely used ensemble approaches on several regression and classification problems with concept drift. The experimental results show that FP-ELM achieves comparable or better performance with lower training time.
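The idea of down-weighting previous training data while updating the output weights incrementally can be sketched as a recursive least-squares update with a forgetting factor over random hidden-layer features. This is a minimal illustration, not the paper's exact FP-ELM: the class name, the fixed forgetting factor `lam` (FP-ELM adapts it from current performance), and the ridge-style initialization are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

class ForgettingOSELM:
    """Sketch of an OS-ELM-style learner with a forgetting factor.

    The hidden layer uses random weights (as in ELM); only the output
    weights beta are updated, chunk by chunk. lam < 1 down-weights
    older chunks. Names here are illustrative, not from the paper.
    """

    def __init__(self, n_inputs, n_hidden, n_outputs, lam=0.98, reg=1e-3):
        self.W = rng.standard_normal((n_inputs, n_hidden))  # random input weights
        self.b = rng.standard_normal(n_hidden)              # random biases
        self.lam = lam
        # Regularized initial covariance; the ridge term keeps P bounded,
        # analogous to avoiding estimator windup.
        self.P = np.eye(n_hidden) / reg
        self.beta = np.zeros((n_hidden, n_outputs))

    def _hidden(self, X):
        # Sigmoid hidden-layer output matrix H
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def partial_fit(self, X, T):
        H = self._hidden(X)  # hidden outputs for the new chunk
        # Recursive least-squares update with forgetting factor lam:
        # old information is discounted by 1/lam in the covariance P.
        K = self.P @ H.T @ np.linalg.inv(
            self.lam * np.eye(len(X)) + H @ self.P @ H.T)
        self.beta = self.beta + K @ (T - H @ self.beta)
        self.P = (self.P - K @ H @ self.P) / self.lam
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta
```

With `lam = 1` this reduces to the standard OS-ELM recursion, which weights all chunks equally; choosing `lam < 1` lets recent chunks dominate, which is what allows tracking under concept drift.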