Abstract

One of the most informative measures for feature extraction (FE) is mutual information (MI). In terms of MI, the optimal FE creates new features that jointly have the largest dependency on the target class. However, obtaining an accurate estimate of a high-dimensional MI, as well as optimizing with respect to it, is not always easy, especially when only small training sets are available. In this paper, we propose an efficient tree-based method for FE in which, at each step, a new feature is created by selecting and linearly combining two features such that the MI between the new feature and the class is maximized. Both the selection of the features to be combined and the estimation of the coefficients of the linear transform rely on estimating 2-D MIs, which is computationally efficient and robust. The effectiveness of our method is evaluated on several real-world data sets. The results show that the classification accuracy obtained by the proposed method is higher than that achieved by other FE methods.
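To make the greedy step concrete, the following is a minimal sketch, not the authors' implementation, of one round of MI-driven feature pairing: every pair of features is considered, a mixing angle for their linear combination is found by a coarse grid search, and the pair whose best combination is most informative about the class is kept. The helper names (`best_linear_combination`, `pair_and_combine`) are illustrative, and scikit-learn's k-NN MI estimator stands in for the paper's own 2-D MI estimation.

```python
# Sketch of one greedy MI-based feature-extraction step (assumed simplification,
# not the paper's exact algorithm). Requires NumPy and scikit-learn.
import numpy as np
from sklearn.feature_selection import mutual_info_classif


def best_linear_combination(f1, f2, y, n_angles=36):
    """Grid-search a mixing angle a so that cos(a)*f1 + sin(a)*f2 maximizes MI with y."""
    best_mi, best_feature = -np.inf, None
    for a in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        candidate = np.cos(a) * f1 + np.sin(a) * f2
        # 1-D MI between the combined feature and the (discrete) class labels.
        mi = mutual_info_classif(candidate.reshape(-1, 1), y, random_state=0)[0]
        if mi > best_mi:
            best_mi, best_feature = mi, candidate
    return best_feature, best_mi


def pair_and_combine(X, y):
    """One greedy step: pick the feature pair whose best linear mix is most informative."""
    n_features = X.shape[1]
    best_mi, best_feature = -np.inf, None
    for i in range(n_features):
        for j in range(i + 1, n_features):
            feat, mi = best_linear_combination(X[:, i], X[:, j], y)
            if mi > best_mi:
                best_mi, best_feature = mi, feat
    return best_feature  # the newly extracted feature
```

In a tree-based scheme of this kind, the new feature would replace (or be appended alongside) its two parents and the step repeated, so the extracted features form the internal nodes of a binary tree over the original inputs.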
