Abstract

Although Binary Relevance (BR) is an adaptable and conceptually simple multi-label learning technique, its inability to exploit label dependencies, together with other problems inherent in multi-label examples, makes it difficult for BR to generalize well when classifying real-world multi-label examples such as annotated images. To strengthen the generalization ability of Binary Relevance, this study used Multi-label Linear Discriminant Analysis (MLDA) as a preprocessing technique to address the label dependencies, the curse of dimensionality, and the label over-counting inherent in multi-labeled images. Binary Relevance with K Nearest Neighbor as the base learner was then fitted, and its classification performance was evaluated on 1000 randomly selected images, with a label cardinality of 2.149, covering the five most frequent categories in the Microsoft Common Objects in Context 2017 (MS COCO 2017) dataset, namely "person", "chair", "bottle", "dining table", and "cup". Experimental results showed that the micro-averaged precision, recall, and F1-score of Multi-label Linear Discriminant Analysis followed by Binary Relevance K Nearest Neighbor (MLDA-BRKNN) improved classification of the 1000 annotated images by more than 30% compared with the micro-averaged precision, recall, and F1-score of Binary Relevance K Nearest Neighbor (BRKNN), which served as the reference classifier in this study.
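
Conceptually, the MLDA-BRKNN pipeline reduces to two steps: project the image features into a lower-dimensional, label-aware space, then fit one KNN classifier per label on the projected features. The sketch below is only an illustration of that idea under simplifying assumptions: the `mlda_fit` function uses plain binary label weights rather than the correlation-normalized weights of the full MLDA formulation, scikit-learn's `MultiOutputClassifier` stands in for Binary Relevance, and the toy data, `n_components`, and `n_neighbors` values are hypothetical, not the settings used in the study.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.multioutput import MultiOutputClassifier
from sklearn.neighbors import KNeighborsClassifier


def mlda_fit(X, Y, n_components):
    """Simplified multi-label LDA: returns a (n_features, n_components) projection."""
    m = X.mean(axis=0)                      # global feature mean
    d = X.shape[1]
    S_b = np.zeros((d, d))                  # between-class scatter
    S_w = np.zeros((d, d))                  # within-class scatter
    for k in range(Y.shape[1]):
        w = Y[:, k].astype(float)           # sample weights for label k
        n_k = w.sum()
        if n_k == 0:
            continue
        m_k = (w[:, None] * X).sum(axis=0) / n_k   # weighted mean of label k
        diff = m_k - m
        S_b += n_k * np.outer(diff, diff)
        centered = X - m_k
        S_w += (w[:, None] * centered).T @ centered
    S_w += 1e-6 * np.eye(d)                 # regularize for invertibility
    eigvals, eigvecs = eigh(S_b, S_w)       # generalized eigenproblem S_b v = lam S_w v
    order = np.argsort(eigvals)[::-1]       # keep directions with the largest ratios
    return eigvecs[:, order[:n_components]]


# Toy usage: replace X and Y with real image features and COCO label indicators.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                # e.g. 50-dimensional image features
Y = (rng.random((200, 5)) < 0.4).astype(int)  # 5 binary labels per image
X_proj = X @ mlda_fit(X, Y, n_components=4)   # label-aware reduced features

# Binary Relevance with KNN: one independent KNN classifier per label column.
brknn = MultiOutputClassifier(KNeighborsClassifier(n_neighbors=5))
brknn.fit(X_proj, Y)
predictions = brknn.predict(X_proj)           # (n_samples, n_labels) binary matrix
```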
