Abstract

One way to increase recognition ability in classification problems is to remove outlier entries, as well as redundant and unnecessary features, from the training set. Filtering and feature selection can have a large impact on classifier accuracy and area under the curve (AUC), since noisy data can confuse a classifier and lead it to learn wrong patterns in the training data. A common approach to data filtering is the use of proximity graphs; however, the problem of selecting optimal filtering parameters is still insufficiently researched. In this paper a filtering procedure based on a k-nearest-neighbours proximity graph is used. Filtering parameter selection is formulated as an outlier minimization problem: the k of the k-NN proximity graph, the power of the distance and the threshold parameter are selected so as to minimize the percentage of outliers in the training data. The performance of six commonly used classifiers (Logistic Regression, Naïve Bayes, Neural Network, Random Forest, Support Vector Machine and Decision Tree) and one heterogeneous classifier combiner (DES-LA) is then compared with and without filtering. Dynamic ensemble selection (DES) systems work by estimating the level of competence of each classifier in a pool of classifiers; only the most competent ones are selected to classify a given test sample. This is achieved by defining a criterion to measure the competence of the base classifiers, such as their accuracy in local regions of the feature space around the query instance. In our case the combiner is based on the local accuracy of the single classifiers, and its output is a linear combination of the single-classifier rankings. After filtering, the accuracy of the DES-LA combiner increases markedly on low-accuracy datasets, whereas filtering has little impact on DES-LA performance on high-accuracy datasets. The results are discussed, and the classifiers whose performance is most affected by the pre-processing filtering step are identified.
The main contribution of the paper is the introduction of modifications to the DES-LA combiner, as well as a comparative analysis of the impact of filtering on classifiers of various types. Testing the filtering algorithm on a real-world dataset (the Taiwan default credit card dataset) confirmed the efficiency of the automatic filtering approach.
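The local-accuracy selection idea described above can be sketched in code. This is a minimal illustration, not the paper's exact DES-LA implementation: the function name `des_local_accuracy`, the pool composition and the choice of k are assumptions for the example, and selection picks a single most competent classifier rather than forming the linear combination of rankings used by DES-LA.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import NearestNeighbors
from sklearn.model_selection import train_test_split

def des_local_accuracy(pool, X_val, y_val, X_query, k=7):
    """For each query, pick the base classifier with the highest
    accuracy in the local region (k nearest validation neighbours)."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_val)
    _, idx = nn.kneighbors(X_query)          # local region around each query
    preds = []
    for q, neigh in zip(X_query, idx):
        # competence = local accuracy of each base classifier
        comp = [(clf.predict(X_val[neigh]) == y_val[neigh]).mean()
                for clf in pool]
        best = pool[int(np.argmax(comp))]    # most competent classifier
        preds.append(best.predict(q.reshape(1, -1))[0])
    return np.array(preds)

# Synthetic data, split into train / validation (competence region) / test
X, y = make_classification(n_samples=600, random_state=0)
X_tr, X_tmp, y_tr, y_tmp = train_test_split(X, y, test_size=0.5, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

pool = [LogisticRegression(max_iter=1000).fit(X_tr, y_tr),
        GaussianNB().fit(X_tr, y_tr),
        DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)]

y_pred = des_local_accuracy(pool, X_val, y_val, X_te)
print("test accuracy:", (y_pred == y_te).mean())
```

The validation set plays the role of the region of competence: competence is never estimated on the training data the pool was fitted on, which would bias the local-accuracy estimates.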

Highlights

  • Performance of a classification model may vary a lot depending on its structure

  • We use a filtering technique based on k-nearest-neighbour (k-NN) graphs with automatic parameter evaluation, unified for all classifiers


Introduction

Performance of a classification model may vary a lot depending on its structure (Ala'raj et al., J Big Data (2020) 7:15): simple classifiers such as Logistic Regression may have drastically lower accuracy than more complex classifiers like SVM or a Neural Network, especially on data with a large number of features [1]. Building heterogeneous combiners from classifiers with different performance may be useless: the combiner will perform better than the simple classifiers but worse than the complex ones, because classifiers with lower performance have the same impact on the final result as more reliable ones. We use a filtering technique based on k-nearest-neighbour (k-NN) graphs (a node is connected to its k nearest neighbours) with automatic parameter evaluation, unified for all classifiers.
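A k-NN graph filter of the kind described above can be sketched as follows. This is a hedged illustration, not the paper's exact procedure: the function name `knn_graph_filter` and the disagreement rule are assumptions, but the three tunable quantities (k, the distance power p, and the threshold) correspond to the filtering parameters mentioned in the text.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_graph_filter(X, y, k=5, p=2, threshold=0.5):
    """Build a k-NN proximity graph and drop training points whose
    neighbourhood label disagreement exceeds the threshold."""
    nn = NearestNeighbors(n_neighbors=k + 1, p=p).fit(X)
    _, idx = nn.kneighbors(X)             # idx[:, 0] is the point itself
    # fraction of a point's k neighbours carrying a different label
    disagree = (y[idx[:, 1:]] != y[:, None]).mean(axis=1)
    keep = disagree <= threshold          # points above threshold are outliers
    return X[keep], y[keep]

# Tiny example: the last point sits in the class-0 cluster but is labelled 1
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [0.15]])
y = np.array([0, 0, 0, 1, 1, 1])
X_f, y_f = knn_graph_filter(X, y, k=3, threshold=0.5)
print(len(X_f))  # → 5: the mislabelled point is filtered out
```

Sweeping k, p and the threshold and keeping the combination that minimizes the outlier percentage gives the automatic parameter evaluation referred to above.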

