Abstract

In multi-label learning, each instance is associated with a subset of predefined labels. A common approach to multi-label classification, proposed by Godbole and Sarawagi (2004), is based on stacking and is called Meta Binary Relevance (MBR). It uses two layers of binary models and feeds the outputs of the first layer to all binary models of the second layer. Hence, the class labels predicted in the first layer are appended to the original features to obtain new class predictions in the second layer. When predicting a specific label in the second layer, irrelevant labels are therefore also used, acting as noisy features. For this reason, the Nearest Neighbor (NN) classifier, which is sensitive to noisy features, has so far not been a suitable base classifier for stacking, and all of its merits, including simplicity, interpretability, global stability to noisy labels, and good performance, are lost. As a first contribution, a popular feature-weighting method for NN classification is used here to address the uncorrelated-labels problem. It tunes a parametric distance function by gradient descent to minimize the classification error on the training data. However, it is known that other objectives, including the F-measure, are more suitable than classification error when learning from imbalanced data. The second contribution of this paper is an extension of this method that improves the F-measure. In our experimental study, the proposed method is compared with state-of-the-art multi-label classifiers from the literature and outperforms them.
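To make the stacking scheme concrete, the following is a minimal sketch of MBR with a 1-NN base classifier (matching the NN setting the abstract discusses). The data, the helper names, and the choice of plain Euclidean distance are illustrative assumptions, not the paper's actual implementation; in particular, the learned parametric distance the paper proposes is not shown here.

```python
# Minimal sketch of Meta Binary Relevance (MBR) stacking with a 1-NN
# base classifier. Toy data and helper names are assumptions for
# illustration only; the paper additionally learns a weighted distance.

def euclid(a, b):
    # Plain (unweighted) Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def nn_predict(train_X, train_y, x):
    # 1-nearest-neighbour prediction: copy the label of the closest point.
    i = min(range(len(train_X)), key=lambda j: euclid(train_X[j], x))
    return train_y[i]

def mbr_fit_predict(X, Y, x_new):
    """X: feature vectors; Y: binary label matrix (one column per label)."""
    n_labels = len(Y[0])
    # Layer 1: one binary-relevance model per label on the raw features.
    layer1 = [nn_predict(X, [row[k] for row in Y], x_new)
              for k in range(n_labels)]
    # Layer 2: augment the features with the label information
    # (true labels at training time, layer-1 predictions at test time)
    # and re-predict every label on the augmented space.
    X_aug = [list(x) + list(y) for x, y in zip(X, Y)]
    x_aug = list(x_new) + layer1
    return [nn_predict(X_aug, [row[k] for row in Y], x_aug)
            for k in range(n_labels)]

# Tiny made-up example: 2 features, 2 labels.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
Y = [[1, 0], [1, 0], [0, 1], [0, 1]]
pred = mbr_fit_predict(X, Y, [0.9, 0.1])  # → [0, 1]
```

The augmented second-layer features are exactly what the abstract criticizes: when predicting one label, the appended predictions for all the other, possibly uncorrelated, labels enter the distance computation as noisy features, which is what the proposed feature weighting is meant to suppress.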
