Abstract

We present a modelling framework for the investigation of supervised learning in non-stationary environments. Specifically, we model two exemplary types of learning systems: prototype-based learning vector quantization (LVQ) for classification and shallow, layered neural networks for regression tasks. We investigate so-called student–teacher scenarios in which the systems are trained from a stream of high-dimensional, labeled data. Properties of the target task are considered to be non-stationary due to drift processes while the training is performed. Different types of concept drift are studied, which affect the density of example inputs only, the target rule itself, or both. By applying methods from statistical physics, we develop a modelling framework for the mathematical analysis of the training dynamics in non-stationary environments. Our results show that standard LVQ algorithms are already suitable for training in non-stationary environments to a certain extent. However, the application of weight decay as an explicit mechanism of forgetting does not improve the performance under the considered drift processes. Furthermore, we investigate gradient-based training of layered neural networks with sigmoidal activation functions and compare it with the use of rectified linear units. Our findings show that the sensitivity to concept drift and the effectiveness of weight decay differ significantly between the two types of activation function.
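As an informal illustration of the setting the abstract describes (not the paper's actual analysis), the following sketch trains an LVQ1 system online from a stream of high-dimensional examples whose cluster centers drift over time, with weight decay applied as an explicit forgetting mechanism. All parameter values, the random-walk form of the drift, and the prototype initialization are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100       # input dimensionality (illustrative)
T = 2000      # length of the example stream
eta = 0.05    # learning rate (illustrative)
gamma = 1e-3  # weight-decay strength, i.e. forgetting rate (illustrative)

# Teacher: one cluster center per class; the target is non-stationary,
# modeled here as a slow random walk of the centers ("real drift").
B = rng.standard_normal((2, N))

# Student: one LVQ prototype per class, initialized from one noisy
# labeled example of each class (a common practical choice).
w = B + 0.5 * rng.standard_normal((2, N))

for t in range(T):
    B += 0.01 * rng.standard_normal((2, N))        # drift of the target
    sigma = int(rng.integers(2))                   # class label
    xi = B[sigma] + 0.5 * rng.standard_normal(N)   # noisy example input
    J = int(np.argmin(((w - xi) ** 2).sum(axis=1)))  # winning prototype
    s = 1.0 if J == sigma else -1.0                # LVQ1: attract or repel
    w[J] += eta * s * (xi - w[J])
    w *= 1.0 - gamma                               # weight decay = forgetting

# Evaluate how well the prototypes track the drifted task
correct = 0
for _ in range(500):
    sigma = int(rng.integers(2))
    xi = B[sigma] + 0.5 * rng.standard_normal(N)
    correct += int(np.argmin(((w - xi) ** 2).sum(axis=1)) == sigma)
acc = correct / 500
print(f"tracking accuracy after drift: {acc:.2f}")
```

The online update lets the prototypes follow the drifting centers even without weight decay; setting `gamma = 0` and comparing accuracies reproduces, in miniature, the kind of question the paper studies analytically.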

Highlights

  • The topic of efficiently learning from example data in the presence of concept drift has attracted significant interest in the machine learning community

  • Averaged learning curves obtained by means of Monte Carlo simulations are shown. These simulations of the actual training process provide an independent confirmation of the ordinary differential equation (ODE)-based description and demonstrate the relevance of results obtained in the thermodynamic limit N → ∞ for relatively small, finite systems

  • We have presented a mathematical framework in which to study the influence of concept drift systematically in model scenarios


Introduction

The topic of efficiently learning from example data in the presence of concept drift has attracted significant interest in the machine learning community. Terms such as lifelong learning or continual learning have become popular keywords in this context [55]. In many technical contexts, training data is available as a non-stationary stream of observations. In such settings, the separation of training and working phases is meaningless; see [1, 17, 27, 32, 55] for reviews.
