Abstract

Feature selection is an important task whose aim is to identify a subset of the most useful features from a large dataset so that classification can be carried out efficiently. A feature selection algorithm is evaluated on two criteria: efficiency and usefulness. While efficiency concerns the time required to find a subset of features, usefulness concerns the quality of that subset. Based on these criteria, an improved fast clustering-based feature selection algorithm (FAST) is proposed and experimentally evaluated in this paper. The improved FAST algorithm works in two steps. In the first step, all features are grouped into clusters using graph-theoretic clustering methods. In the second step, the most representative feature, the one most strongly related to the target classes, is selected from each cluster to form the final subset of features. Because features in different clusters are relatively independent, the clustering-based strategy of improved FAST has a high probability of producing a subset of useful and independent features. To maximize the efficiency of the proposed algorithm, we apply an efficient minimum-spanning-tree (MST) clustering method. We conduct extensive experiments with different feature selection methods to validate, to the best of our knowledge, the features we consider, and we compare the results of our model with the original fast clustering-based feature selection algorithm on several datasets.
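The two-step procedure described above can be sketched in a minimal form. This is an illustrative reconstruction, not the paper's implementation: it uses absolute Pearson correlation as the feature-relatedness measure (the FAST literature typically uses symmetric uncertainty), and the function name and `edge_threshold` parameter are assumptions introduced here for demonstration.

```python
# Hypothetical sketch of MST-based clustering for feature selection.
# Assumption: |Pearson correlation| stands in for the relatedness measure.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def fast_like_select(X, y, edge_threshold=0.3):
    n_features = X.shape[1]
    # Step 1a: pairwise feature relatedness (higher = more related)
    corr = np.abs(np.corrcoef(X, rowvar=False))
    # The MST minimizes total weight, so use (1 - relatedness) as distance;
    # a tiny floor keeps fully correlated pairs as explicit edges
    dist = np.maximum(1.0 - corr, 1e-12)
    np.fill_diagonal(dist, 0.0)
    mst = minimum_spanning_tree(dist).toarray()
    # Step 1b: cut weak links (large distances) to split the tree into clusters
    mst[mst > edge_threshold] = 0.0
    n_clusters, labels = connected_components(mst, directed=False)
    # Step 2: from each cluster, keep the feature most related to the target
    relevance = np.abs(
        np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_features)])
    )
    selected = [int(np.argmax(np.where(labels == c, relevance, -1.0)))
                for c in range(n_clusters)]
    return sorted(selected)
```

With two groups of highly correlated features, cutting the single weak MST edge between the groups yields two clusters, and one representative is kept from each, so redundant near-duplicates are discarded while relatively independent features survive.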
