Abstract

Feature selection plays a vital role in data mining and machine learning for analyzing high-dimensional data. A popular criterion for feature selection is Mutual Information (MI), as it can capture both linear and non-linear relationships among features and the class variable. Existing MI-based feature selection methods use different approximation techniques to capture the joint behavior of features and their relationship with the classes, and to eliminate redundant features. However, these approximations may fail to select the optimal set of features, especially when the feature dimension is high. Moreover, in the absence of an appropriate search strategy, these MI-based approximations may select unnecessary features. To address these issues, we propose Feature Selection based on Redundancy maximized Clusters (FSRC), a method that creates clusters of redundant features and then selects a subset of representative features from each cluster. We also propose the use of bias-corrected normalized MI in this regard. Rigorous experiments on thirty benchmark datasets demonstrate that FSRC outperforms existing state-of-the-art methods in most cases. Moreover, FSRC is applied to three gene expression datasets, which are high-dimensional but have small sample sizes. The results show that FSRC can select features (genes) that are not only discriminating but also biologically relevant.
