Abstract
Feature selection, which reduces the dimensionality of the feature space, plays an important role in multi-label learning with high-dimensional data. Although most existing multi-label feature selection approaches address label ambiguity, they rest on the assumption that logical labels are uniformly distributed, so they cannot be applied in many practical settings where the significance of each relevant label differs from instance to instance. To deal with this issue, this study introduces label distribution learning, in which label memberships take real values, to model labeling significance. However, existing multi-label feature selection is limited to labels expressed as logical relations. To solve this problem, we first combine random variable distributions with granular computing and propose a label enhancement algorithm that transforms the logical labels in multi-label data into label distributions carrying richer supervised information, thereby mining the hidden label significance of every instance. On this basis, to remove redundant or irrelevant features from multi-label data, we develop a label distribution feature selection algorithm based on mutual information and label enhancement. Finally, experimental results show that the proposed method outperforms state-of-the-art approaches on multi-label data.
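To make the overall pipeline concrete, the sketch below illustrates the two stages the abstract describes: enhancing logical labels into a per-instance label distribution, then ranking features by mutual information with that distribution. It is a minimal illustration only, not the paper's algorithm; the k-NN smoothing step, the summed mutual-information score, and all parameter names are assumptions introduced for exposition.

```python
# Minimal, illustrative sketch -- NOT the paper's method. Assumptions:
#   * label enhancement is approximated by blending each instance's logical
#     labels with its k nearest neighbors' labels, then row-normalizing;
#   * each feature is scored by its summed mutual information with every
#     label-distribution component (estimated with scikit-learn).
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.feature_selection import mutual_info_regression


def enhance_labels(X, Y, k=10, alpha=0.5):
    """Turn a logical label matrix Y (n x q, entries in {0, 1}) into a
    label distribution D whose rows sum to 1."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)                 # idx[:, 0] is the instance itself
    neighbor_mean = Y[idx[:, 1:]].mean(axis=1)
    D = alpha * Y + (1 - alpha) * neighbor_mean
    D += 1e-12                                # guard against all-zero rows
    return D / D.sum(axis=1, keepdims=True)


def select_features(X, D, n_features=20):
    """Rank features by summed mutual information with every
    label-distribution component and return the top indices."""
    scores = np.zeros(X.shape[1])
    for j in range(D.shape[1]):
        scores += mutual_info_regression(X, D[:, j], random_state=0)
    return np.argsort(scores)[::-1][:n_features]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))                        # toy feature matrix
    Y = (rng.random(size=(200, 5)) < 0.3).astype(float)   # toy logical labels
    D = enhance_labels(X, Y)
    print("selected features:", select_features(X, D, n_features=10))
```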