Abstract
Multi-label learning is the problem in which each instance is associated with multiple labels simultaneously. Binary Relevance (BR), which derives from the one-vs-all idea in multi-class classification, is a representative algorithm for multi-label learning. It ignores label correlations and may suffer from the class-imbalance problem. Moreover, BR is usually implemented by decomposing the task into many independent binary classifiers and training each individually, which makes it difficult to extend. When BR is instead extended by learning all labels jointly, the commonly used least squares loss function is better suited to a regression task than to a classification task. In this paper, we propose a unified framework implementing linear BR for multi-label learning, which is easy to extend and can also be applied as one-vs-all for multi-class classification. We mainly focus on five popular convex loss functions. Experimental results show that the unified framework achieves performance competitive with traditional implementations and several other well-established algorithms for both multi-label learning and multi-class classification. Furthermore, the logistic, exponential, and least squared hinge loss functions are more suitable for multi-label learning, while the logistic and least squared hinge loss functions are more suitable for multi-class classification.
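To make the BR decomposition concrete, the following is a minimal sketch of linear Binary Relevance with the least squares loss mentioned in the abstract: one ridge-regularized linear classifier is fit per label, independently of the others. The function names, the synthetic data, and the regularization parameter `lam` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fit_br_least_squares(X, Y, lam=1e-3):
    """Binary Relevance with a least squares loss: fit one ridge-regularized
    linear classifier per label. X is (n, d) features; Y is (n, L) labels
    coded in {-1, +1}. Returns W of shape (d, L), one weight column per label.
    Because the least squares solution is closed-form, all L columns can be
    solved at once: W = (X'X + lam*I)^{-1} X'Y."""
    d = X.shape[1]
    A = X.T @ X + lam * np.eye(d)
    return np.linalg.solve(A, X.T @ Y)

def predict_br(X, W):
    """Predict each label independently by thresholding its linear score at 0,
    exactly the per-label decision BR makes."""
    return np.where(X @ W >= 0, 1, -1)

# Tiny synthetic example (illustrative): two labels, each driven by one feature.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [-1.0, -1.0]])
Y = np.array([[1, -1], [-1, 1], [1, 1], [-1, -1]])
W = fit_br_least_squares(X, Y)
pred = predict_br(X, W)
```

Note that each column of `W` is learned without reference to the other labels, which is precisely the independence (ignoring label correlations) that the abstract identifies as BR's weakness and that the proposed joint framework is designed to address.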