Abstract

Feature (or attribute) selection is the task of choosing, from the full feature set, the subset that yields the best classification accuracy. It serves as a preprocessing step that improves the subsequent classification task. The main objective of feature selection is to find useful features that represent the data and to remove features that are irrelevant or redundant. Reducing the number of features in a dataset can lead to faster software quality model training and improved classifier performance. This paper presents a new method for feature subset selection based on conditional mutual information. The proposed method selects a feature subset with a minimum number of relevant features, yielding higher average classification accuracy on the datasets. Experimental results with UC Irvine (UCI) datasets and a Naive Bayes classifier show that the proposed algorithm is effective and efficient: it selects subsets with fewer features while achieving higher classification accuracy than existing feature selection methods.
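The abstract does not spell out the selection criterion, so the following is only a minimal sketch of one well-known family of conditional-mutual-information selectors, a CMIM-style greedy search over discrete features (the paper's actual criterion may differ). The function names (`mutual_info`, `cond_mutual_info`, `cmim_select`) and the min-over-selected scoring rule are illustrative assumptions, not taken from the paper.

```python
from collections import Counter
from math import log2

def mutual_info(x, y):
    # Empirical I(X;Y) for two discrete sequences of equal length.
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum((c / n) * log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def cond_mutual_info(x, y, z):
    # I(X;Y|Z) = sum_z p(z) * I(X;Y | Z=z), estimated from counts.
    n = len(z)
    total = 0.0
    for zv, cz in Counter(z).items():
        idx = [i for i in range(n) if z[i] == zv]
        total += (cz / n) * mutual_info([x[i] for i in idx],
                                        [y[i] for i in idx])
    return total

def cmim_select(features, y, k):
    # Greedy CMIM-style selection: repeatedly add the candidate whose
    # worst-case conditional relevance min_j I(X; Y | X_j), taken over
    # already-selected features X_j, is largest. The first pick simply
    # maximizes I(X; Y). Redundant copies of a chosen feature score 0
    # and are therefore avoided.
    selected, remaining = [], list(range(len(features)))
    while remaining and len(selected) < k:
        def score(i):
            if not selected:
                return mutual_info(features[i], y)
            return min(cond_mutual_info(features[i], y, features[j])
                       for j in selected)
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

For example, a feature identical to the class label has I(X;Y) = 1 bit on balanced binary data, while a duplicate of an already-selected feature contributes no conditional information and is ranked last.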
