Abstract

This work presents a new neuroevolutionary model, called NEVE (Neuroevolutionary Ensemble), based on an ensemble of Multi-Layer Perceptron (MLP) neural networks for learning in nonstationary environments. NEVE uses quantum-inspired evolutionary models to automatically configure the ensemble members and combine their outputs. These evolutionary models identify the most appropriate topology for each MLP network, select the most relevant input variables, determine the neural network weights and calculate the voting weight of each ensemble member. Four variations of NEVE are developed, differing in the mechanism for detecting and treating concept drifts, including proactive drift detection approaches. The proposed models were evaluated on real and artificial datasets, and the results were compared with those of consolidated models from the literature. The results show that the accuracy of NEVE is higher in most cases and that the best configurations use some mechanism for drift detection. These results reinforce that the neuroevolutionary ensemble approach is a robust choice for situations in which the datasets are subject to sudden changes in behaviour.
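To make the architecture concrete, the following is a minimal sketch of an ensemble of MLPs combined by weighted voting, written with scikit-learn's MLPClassifier. The topologies and voting weights here are illustrative placeholders for the values that NEVE evolves with its quantum-inspired algorithms; this is not the authors' implementation.

```python
# Minimal sketch of a weighted-vote MLP ensemble (not the authors' code).
# Member topologies and voting weights stand in for the values that NEVE
# evolves with its quantum-inspired evolutionary models.
import numpy as np
from sklearn.neural_network import MLPClassifier

class WeightedMLPEnsemble:
    def __init__(self, topologies, voting_weights):
        # One MLP per topology; each weight scales that member's vote.
        self.members = [MLPClassifier(hidden_layer_sizes=t, max_iter=500)
                        for t in topologies]
        self.weights = np.asarray(voting_weights, dtype=float)

    def fit(self, X, y):
        for m in self.members:
            m.fit(X, y)
        return self

    def predict(self, X):
        # Weighted vote over the members' class-probability estimates.
        probs = sum(w * m.predict_proba(X)
                    for w, m in zip(self.weights, self.members))
        return self.members[0].classes_[np.argmax(probs, axis=1)]

# Usage (placeholder topologies and weights):
#   WeightedMLPEnsemble([(5,), (10,), (10, 5)], [0.5, 0.3, 0.2]).fit(X, y)
```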

Highlights

  • The ability of a classifier to learn from incremental and dynamic data extracted from a nonstationary environment poses a challenge to the field of computational intelligence

  • In [10], we proposed a new drift detection mechanism, called DetectA (Detect Abrupt Drift), which uses a proactive detection approach (a generic sketch of the block-comparison idea follows this list)

  • The experiments presented below aimed at investigating the differences in accuracy and computational performance among the four variations of the neuroevolutionary ensemble model (NEVE), as well as the impact of the voting method, ensemble size and number of neurons in the hidden layer
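Proactive detectors such as DetectA flag a drift by comparing statistics of consecutive data blocks before the classifier's accuracy degrades. The sketch below shows only this generic block-comparison idea, using a per-feature Welch t-test as an assumed stand-in; it is not DetectA's actual statistical test, which is specified in [10].

```python
# Generic block-comparison drift check, illustrating the proactive idea only.
# The per-feature Welch t-test is an assumption for illustration; DetectA's
# actual test is described in [10].
import numpy as np
from scipy import stats

def block_drift_detected(prev_block, new_block, alpha=0.01):
    """Flag drift if any feature's mean differs significantly between blocks."""
    _, p_values = stats.ttest_ind(prev_block, new_block,
                                  axis=0, equal_var=False)
    return bool(np.any(p_values < alpha))
```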



Introduction

The ability of a classifier to learn from incremental and dynamic data extracted from a nonstationary environment (one whose data distribution changes over time) poses a challenge to the field of computational intelligence. Among other abilities, such a classifier must be able to forget what has been learned when that knowledge is no longer useful for classifying new instances. All these abilities seek, in one way or another, to deal with a phenomenon called concept drift [51, 22]. One of the older and simpler approaches is a sliding window (not always contiguous) over the input data, so that the classifier is trained only on the data delimited by this window [21]. Another approach is to detect deviations and, when they occur, adjust the classifier [7]. Most models based on weighted classifier ensembles determine the weight of each classifier using heuristics related to its performance on the most recently received data [22].
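The sliding-window approach cited above amounts to retraining on a bounded buffer of recent examples, so older knowledge is implicitly forgotten. Below is a minimal generic sketch of that idea; the window size, the MLP topology and the retrain-on-every-update policy are illustrative assumptions, not choices taken from [21] or from NEVE.

```python
# Generic sliding-window learner: a minimal sketch of the idea behind [21].
# Window size, topology and the retraining policy are illustrative only.
from collections import deque

import numpy as np
from sklearn.neural_network import MLPClassifier

class SlidingWindowClassifier:
    def __init__(self, window_size=500):
        self.window = deque(maxlen=window_size)  # oldest examples fall out
        self.model = None

    def partial_update(self, X_new, y_new):
        # Add the newest labelled examples, then retrain on the window only,
        # so knowledge older than the window is implicitly forgotten.
        self.window.extend(zip(X_new, y_new))
        X, y = zip(*self.window)
        self.model = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500)
        self.model.fit(np.asarray(X), np.asarray(y))

    def predict(self, X):
        return self.model.predict(np.asarray(X))
```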
