Abstract

Evolving fuzzy systems (EFS) have attracted wide attention in the community for handling learning from data streams in an incremental, single-pass and transparent manner. The main focus so far has been on the development of approaches for single EFS models. Forgetting mechanisms have been used to increase their flexibility, especially to adapt quickly to changing situations such as drifting data distributions. These, however, require a forgetting factor steering the degree to which older learned concepts are outweighed over time. Furthermore, as purely supervised incremental methods, they typically assume that target reference values are immediately available without any delay. In this paper, we propose a new concept of learning fuzzy systems from data streams, which we call sequential ensembling. It models the recent dependencies in streams on a chunk-wise basis: for each new incoming chunk, a new fuzzy model is trained from scratch and added to the ensemble (of fuzzy systems trained before). The point is that a new chunk can be used for establishing a new fuzzy model as soon as its target values become available. This induces (i) flexibility in respecting the actual system delay in receiving target values (by setting the chunk size adequately) and (ii) fast drift handling. The latter is realized with specific prediction techniques for new data chunks based on the sequential ensemble members trained so far over time, for which we propose four different variants. These include specific spatial and temporal uncertainty concepts. Finally, in order to cope with large-scale and (theoretically) infinite data streams within a reasonable amount of prediction time, we demonstrate a concept for pruning past ensemble members. The results on two data streams show significantly improved performance compared to single EFS models in terms of better convergence of the accumulated chunk-wise ahead prediction error trends over time. This is especially true in the case of abrupt and gradual drifts appearing in the target concept, where the sequential ensemble (especially due to recent weak members) is able to react more flexibly and quickly than (heavier) single EFS models. In the case of input space drifts and new operating conditions, the more advanced prediction schemes, which include uncertainty weighting concepts, can significantly outperform standard averaging over all members' outputs.
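To make the chunk-wise procedure concrete, the following is a minimal Python sketch of the sequential-ensembling idea described above, not the paper's implementation: a plain least-squares regressor stands in for the fuzzy model trained per chunk, prediction on a new chunk simply averages all members' outputs (the most basic of the four proposed variants, without the spatial/temporal uncertainty weighting), and a hypothetical maximum ensemble size bounds the member list to mimic pruning of past members.

```python
# Sketch only: a per-chunk least-squares model stands in for an evolving fuzzy system.
import numpy as np

class SequentialEnsemble:
    def __init__(self, max_members=50):
        self.members = []            # one model (weight vector) per processed chunk
        self.max_members = max_members

    def _fit_member(self, X, y):
        # Surrogate for training a fuzzy model from scratch on one chunk:
        # ordinary least squares with a bias term.
        Xb = np.hstack([X, np.ones((X.shape[0], 1))])
        w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
        return w

    def predict(self, X):
        # Simplest prediction scheme: unweighted averaging over all members' outputs.
        Xb = np.hstack([X, np.ones((X.shape[0], 1))])
        preds = np.stack([Xb @ w for w in self.members])
        return preds.mean(axis=0)

    def update(self, X_chunk, y_chunk):
        # Called as soon as the target values for a chunk become available.
        self.members.append(self._fit_member(X_chunk, y_chunk))
        if len(self.members) > self.max_members:
            self.members.pop(0)      # prune the oldest member

# Usage: predict each chunk ahead of time, then update once its targets arrive.
rng = np.random.default_rng(0)
ens = SequentialEnsemble(max_members=10)
for t in range(20):
    X = rng.normal(size=(100, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)
    if ens.members:
        chunk_ahead_error = np.mean((ens.predict(X) - y) ** 2)
    ens.update(X, y)
```

The chunk size here plays the role described in the abstract: it can be chosen to match the actual delay with which target values arrive, and the bound on the number of members keeps prediction time reasonable on (theoretically) infinite streams.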
