Abstract

Feature selection entails identifying a subset of the most useful features that produces results comparable to those of the original, complete set of features. A feature selection algorithm can be assessed in terms of both efficiency and effectiveness: efficiency concerns the time required to find a subset of features, while effectiveness concerns the quality of that subset. This paper proposes and experimentally evaluates a fast clustering-based feature selection algorithm, FAST, against these criteria. The FAST algorithm operates in two steps. In the first step, features are partitioned into clusters using graph-theoretic clustering methods. In the second step, the most representative feature from each cluster, i.e., the one most strongly related to the target classes, is selected to form the final subset of features. Because features in different clusters are relatively independent, FAST's clustering-based strategy has a high probability of producing a subset of useful and independent features. To ensure FAST's efficiency, we adopt the efficient minimum-spanning-tree (MST) clustering method. An empirical study evaluates the efficiency and effectiveness of the FAST algorithm. FAST is compared with several representative feature selection algorithms, namely FCBF, ReliefF, CFS, Consist, and FOCUS-SF, with respect to four types of well-known classifiers: the probability-based Naive Bayes, the tree-based C4.5, the instance-based IB1, and the rule-based RIPPER, before and after feature selection. The results, obtained on 35 publicly available real-world high-dimensional image, microarray, and text datasets, show that FAST not only produces smaller subsets of features but also improves the performance of all four types of classifiers.
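To make the two-stage procedure concrete, the following is a minimal Python sketch. It assumes symmetric uncertainty (SU) as the feature-feature and feature-class correlation measure and a simple MST edge-cutting rule (an edge is removed when its SU is weaker than both endpoints' relevance to the class); neither choice is specified in this abstract, and the names fast_select and relevance_threshold are illustrative rather than part of any published implementation.

import numpy as np
from itertools import combinations
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components, minimum_spanning_tree


def entropy(x):
    """Shannon entropy (bits) of a discrete variable."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))


def symmetric_uncertainty(x, y):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), normalized to [0, 1]."""
    hx, hy = entropy(x), entropy(y)
    if hx + hy == 0:
        return 0.0
    joint = entropy([f"{a}|{b}" for a, b in zip(x, y)])
    mutual_info = hx + hy - joint
    return 2.0 * mutual_info / (hx + hy)


def fast_select(X, y, relevance_threshold=0.0):
    """Two-stage FAST-style selection on a discrete matrix X (n_samples, n_features)."""
    n_features = X.shape[1]

    # Keep only features with non-trivial relevance to the class
    # (the threshold is a hypothetical knob, not taken from the abstract).
    su_class = np.array([symmetric_uncertainty(X[:, i], y) for i in range(n_features)])
    relevant = [i for i in range(n_features) if su_class[i] > relevance_threshold]
    k = len(relevant)

    # Stage 1: complete feature graph weighted by pairwise SU; shift weights
    # to 2 - SU (strictly positive) so SciPy's *minimum* spanning tree keeps
    # the most strongly correlated edges.
    weights = np.zeros((k, k))
    for a, b in combinations(range(k), 2):
        su = symmetric_uncertainty(X[:, relevant[a]], X[:, relevant[b]])
        weights[a, b] = 2.0 - su
    mst = minimum_spanning_tree(csr_matrix(weights)).toarray()

    # Cut MST edges whose feature-feature correlation is weaker than each
    # endpoint's correlation with the class; the surviving trees are clusters.
    for a, b in zip(*mst.nonzero()):
        su_ab = 2.0 - mst[a, b]
        if su_ab < su_class[relevant[a]] and su_ab < su_class[relevant[b]]:
            mst[a, b] = 0.0
    n_clusters, labels = connected_components(csr_matrix(mst), directed=False)

    # Stage 2: from each cluster keep the single feature most related to the class.
    selected = []
    for c in range(n_clusters):
        members = [relevant[i] for i in range(k) if labels[i] == c]
        selected.append(max(members, key=lambda i: su_class[i]))
    return sorted(selected)

Because SciPy only computes minimum spanning trees, pairwise SU values are mapped to the strictly positive weights 2 - SU, so minimizing total weight retains the most strongly correlated edges without losing zero-SU edges to sparse-matrix semantics.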
