Abstract

In recent years, multi-label learning has attracted considerable research attention and has been widely applied to practical problems. Like single-label learning, multi-label learning suffers from the curse of dimensionality. Feature selection is a key preprocessing step for mitigating this problem, and the filter is a simple and fast feature selection approach. A filter defines a statistical criterion that ranks features by how useful they are expected to be for classification; the highest-ranked features are selected and the lowest-ranked ones are discarded. Computing the mutual information between features and labels is one of the most commonly used filter criteria. In this paper, we study the multi-label feature selection problem in depth through multivariate mutual information. First, a general framework for multi-label feature selection based on mutual information is established, and a unified formula for multi-label feature selection is given. Second, to reduce computational complexity, the calculation of conditional mutual information is simplified, and two multi-label feature selection algorithms based on mutual information are proposed. Finally, the classification performance of the different algorithms is compared on multi-label datasets, and the experimental results show that the proposed algorithms perform well in feature selection.
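To make the filter idea concrete, the following Python sketch ranks features by their summed mutual information with each label and keeps the top-k. This is a generic illustration only, not the algorithms proposed in the paper (which additionally simplify and exploit conditional mutual information); the function name `mi_filter_rank` and the toy data are assumptions introduced for the example.

```python
# A minimal sketch of a mutual-information filter for multi-label data.
# NOT the paper's proposed method: it only illustrates the generic filter idea
# described in the abstract (score each feature by its mutual information with
# the labels, rank, keep the top-k), ignoring redundancy among features.
# Assumes discrete (or pre-discretized) features X and a binary label matrix Y.
import numpy as np
from sklearn.metrics import mutual_info_score

def mi_filter_rank(X, Y, k):
    """Return indices of the k features with the largest summed
    mutual information over all labels."""
    n_features = X.shape[1]
    n_labels = Y.shape[1]
    scores = np.zeros(n_features)
    for j in range(n_features):
        # Sum I(f_j; y_l) over labels as a simple multi-label relevance score.
        scores[j] = sum(mutual_info_score(X[:, j], Y[:, l])
                        for l in range(n_labels))
    # Sort in descending order of relevance and keep the top-k feature indices.
    return np.argsort(scores)[::-1][:k]

# Usage on toy data: 8 samples, 4 discrete features, 2 labels.
X = np.array([[0, 1, 2, 0],
              [1, 0, 2, 1],
              [0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 2, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0],
              [1, 1, 2, 1]])
Y = np.array([[1, 0], [0, 1], [1, 0], [0, 1],
              [1, 0], [0, 1], [1, 0], [0, 1]])
print(mi_filter_rank(X, Y, 2))
```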
