Abstract

In multi-label text classification, the aim is to assign each text a set of descriptive labels that together characterise its content. The task poses three main challenges: (i) a large number of text (input) features, (ii) an implicit relationship between input features and output labels, and (iii) implicit inter-label dependencies. Traditional approaches to multi-label classification do not address these problems collectively. A feature selection strategy that uses local features to discriminate within a class, together with global features that distinctly separate classes, can be very effective for multi-label classification. In this research, we propose a feature selection and ranking strategy based on local and global features. A Naïve Bayes classifier trained on the combination of these two feature sets is compared with a baseline built on term frequency-inverse document frequency (TF-IDF) features. A series of experiments on standard multi-label text datasets, evaluated with metrics such as Hamming loss, subset accuracy, and micro/macro F1 scores, yields encouraging results.
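The evaluation metrics named above operate on binary label-indicator matrices, where each row is a sample and each column a label. The sketch below (illustrative only, not the paper's implementation) shows how Hamming loss, subset accuracy, and micro-averaged F1 can be computed from such matrices:

```python
def hamming_loss(y_true, y_pred):
    """Fraction of individual label positions predicted incorrectly."""
    n, num_labels = len(y_true), len(y_true[0])
    wrong = sum(t != p
                for ti, pi in zip(y_true, y_pred)
                for t, p in zip(ti, pi))
    return wrong / (n * num_labels)


def subset_accuracy(y_true, y_pred):
    """Fraction of samples whose entire label set is predicted exactly."""
    return sum(ti == pi for ti, pi in zip(y_true, y_pred)) / len(y_true)


def micro_f1(y_true, y_pred):
    """F1 score pooling true/false positives across all labels."""
    tp = fp = fn = 0
    for ti, pi in zip(y_true, y_pred):
        for t, p in zip(ti, pi):
            tp += (t == 1 and p == 1)
            fp += (t == 0 and p == 1)
            fn += (t == 1 and p == 0)
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0


# Toy example: 2 samples, 3 labels (hypothetical data).
y_true = [[1, 0, 1], [0, 1, 0]]
y_pred = [[1, 0, 0], [0, 1, 0]]
print(hamming_loss(y_true, y_pred))    # 1 wrong position out of 6
print(subset_accuracy(y_true, y_pred)) # only the second sample matches fully
print(micro_f1(y_true, y_pred))
```

Macro F1 would instead average per-label F1 scores, weighting rare labels equally with frequent ones; subset accuracy is the strictest of these measures, since a single mispredicted label fails the whole sample.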
