Abstract

Evolving fuzzy neural classifiers are incremental, adaptive models that update their architecture and parameters with each new incoming sample, typically arriving as a data stream in classification problems. Most existing techniques assume that target labels are always available and thus update their structures and parameters in a fully supervised manner. This paper applies ideas from active learning to select only the data most relevant for updating the model, which can greatly reduce the tedious and costly labeling effort required from users/operators in an online system. We therefore propose an online active learning (oAL) methodology that is closely linked to the internal evolving learning engine for fuzzy neurons, which is based on incremental data-cloud formation. The selection relies on the specificity of the current clouds, and especially on the change in their specificity induced by new (unlabeled) samples, to identify samples carrying information relevant to updating previously formed clouds. This is combined with the unsupervised cloud evolution criterion: when it fires, the data contain new knowledge for which the class response needs to be known, so the sample is selected for labeling feedback. In synergy with the evolving fuzzy neural classifier, the method acts in an incremental, single-pass manner without revisiting past samples, which makes it extremely fast, as only the fuzzy neurons attached to a new sample need to be checked for the degree of their specificity change. To demonstrate the technique's efficiency, we conducted tests on binary classification streams commonly used by the machine learning community.
The number of supervised samples used for model updates could be significantly reduced with a low or even negligible decrease in classification-accuracy trends, whereas a random selection of samples (at the same percentages as selected by our oAL approach) showed large performance downtrends. Furthermore, very similar rule-evolution trends were observed across different percentages of selected samples, indicating good robustness of our method with respect to knowledge extraction (which remains essentially unchanged).
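The selection mechanism summarized above can be illustrated with a minimal sketch. Everything below is an illustrative assumption rather than the paper's actual formulation: the specificity proxy (inverse local scatter), the distance-based evolution test, the thresholds `delta` and `evolve_dist`, and all class and function names are hypothetical.

```python
import math

# Hypothetical sketch of the abstract's single-pass sample selection.
# A "cloud" keeps an incremental mean and mean squared norm; a label is
# requested only when a sample would markedly change the nearest cloud's
# specificity, or when it lies far enough away to trigger cloud evolution.

class Cloud:
    def __init__(self, x):
        self.n = 1
        self.mean = list(x)
        self.msq = sum(v * v for v in x)  # running mean of squared norms

    def specificity(self):
        # Illustrative specificity proxy: inverse of the cloud's scatter.
        var = max(self.msq - sum(v * v for v in self.mean), 1e-12)
        return 1.0 / var

    def updated_specificity(self, x):
        # Specificity the cloud would have after absorbing x (not committed).
        n = self.n + 1
        mean = [(m * self.n + v) / n for m, v in zip(self.mean, x)]
        msq = (self.msq * self.n + sum(v * v for v in x)) / n
        var = max(msq - sum(v * v for v in mean), 1e-12)
        return 1.0 / var

    def absorb(self, x):
        # Incremental (Welford-style) update of mean and mean squared norm.
        self.n += 1
        self.mean = [m + (v - m) / self.n for m, v in zip(self.mean, x)]
        self.msq += (sum(v * v for v in x) - self.msq) / self.n


def should_query(clouds, x, delta=0.3, evolve_dist=2.0):
    """Single-pass decision: request a label for x? Thresholds are assumptions."""
    if not clouds:
        return True  # no knowledge yet -> query
    # Only the cloud attached to the sample is checked, keeping the test cheap.
    near = min(clouds, key=lambda c: math.dist(c.mean, x))
    if math.dist(near.mean, x) > evolve_dist:
        return True  # evolution criterion: new knowledge, label needed
    s_old, s_new = near.specificity(), near.updated_specificity(x)
    return abs(s_new - s_old) / s_old > delta  # large specificity change
```

Samples for which `should_query` returns `False` are used without a label (or discarded for supervised updating), so the labeling effort scales with the number of informative samples rather than with the stream length.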
