Abstract

Large-scale Multi-label Text Classification (LMTC) has a wide range of Natural Language Processing (NLP) applications and presents interesting challenges. First, not all labels are well represented in the training set, due to the very large label set and the skewed label distributions of LMTC datasets. Also, label hierarchies and differences in human labelling guidelines may affect graph-aware annotation proximity. Finally, the label hierarchies are periodically updated, requiring LMTC models capable of zero-shot generalization. Current state-of-the-art LMTC models employ Label-Wise Attention Networks (LWANs), which (1) typically treat LMTC as flat multi-label classification; (2) may use the label hierarchy to improve zero-shot learning, although this practice is vastly understudied; and (3) have not been combined with pre-trained Transformers (e.g. BERT), which have led to state-of-the-art results in several NLP benchmarks. Here, for the first time, we empirically evaluate a battery of LMTC methods, from vanilla LWANs to hierarchical classification approaches and transfer learning, on frequent, few, and zero-shot learning on three datasets from different domains. We show that hierarchical methods based on Probabilistic Label Trees (PLTs) outperform LWANs. Furthermore, we show that Transformer-based approaches outperform the state of the art in two of the datasets, and we propose a new state-of-the-art method which combines BERT with LWANs. Finally, we propose new models that leverage the label hierarchy to improve few and zero-shot learning, considering on each dataset a graph-aware annotation proximity measure that we introduce.
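
At the core of the LWAN models discussed above is a label-wise attention head: each label has its own attention vector over the encoder's token representations, yielding a label-specific document vector that is then scored for that label. The following is a minimal PyTorch sketch of this idea; the encoder outputs, dimensions, label count, and initialisation are illustrative assumptions, not the configuration used in the paper.

    import torch
    import torch.nn as nn

    class LabelWiseAttention(nn.Module):
        """Minimal label-wise attention head (in the spirit of LWAN):
        one attention query per label builds a label-specific document
        vector, which is scored by a label-specific output vector."""

        def __init__(self, hidden_dim: int, num_labels: int):
            super().__init__()
            # One attention query vector per label.
            self.label_queries = nn.Parameter(torch.empty(num_labels, hidden_dim))
            # One scoring vector and bias per label.
            self.label_outputs = nn.Parameter(torch.empty(num_labels, hidden_dim))
            self.label_bias = nn.Parameter(torch.zeros(num_labels))
            nn.init.xavier_uniform_(self.label_queries)
            nn.init.xavier_uniform_(self.label_outputs)

        def forward(self, token_states: torch.Tensor) -> torch.Tensor:
            # token_states: (batch, seq_len, hidden), e.g. BIGRU or BERT outputs.
            scores = torch.einsum("bth,lh->blt", token_states, self.label_queries)
            weights = torch.softmax(scores, dim=-1)
            # Label-specific document representations: (batch, num_labels, hidden).
            label_docs = torch.einsum("blt,bth->blh", weights, token_states)
            # Per-label logits, squashed to multi-label probabilities.
            logits = (label_docs * self.label_outputs).sum(-1) + self.label_bias
            return torch.sigmoid(logits)

    # Illustrative usage with random "encoder" outputs (hypothetical sizes).
    encoder_out = torch.randn(2, 128, 256)             # 2 docs, 128 tokens, 256 dims
    head = LabelWiseAttention(hidden_dim=256, num_labels=5000)
    probabilities = head(encoder_out)                  # shape (2, 5000)

A BERT-plus-LWAN model of the kind proposed in the paper would feed BERT's token representations into such a head instead of BIGRU states; the exact wiring above is a sketch, not the authors' implementation.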

Highlights

  • Large-scale Multi-label Text Classification (LMTC) is the task of assigning a subset of labels from a large predefined set to a given document

  • We show that hierarchical LMTC approaches based on Probabilistic Label Trees (PLTs) (Prabhu et al., 2018; Khandagale et al., 2019; You et al., 2019) outperform flat neural state-of-the-art methods, i.e., Label-Wise Attention Networks (LWANs) (Mullenbach et al., 2018), in two out of three datasets (EURLEX57K, AMAZON13K); see the sketch after this list for the PLT inference idea

  • We repeated the experiments of BIGRU-LWAN on MIMIC-III after shuffling the words of the documents, and performance dropped by approx. 7.7% across all measures, matching the performance of PLT-based methods
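
The PLT-based methods above arrange the labels at the leaves of a tree; each node carries a classifier estimating how likely its subtree is relevant to the document, a label's score is the product of node probabilities along its root-to-leaf path, and low-probability subtrees are pruned during inference. The sketch below illustrates only this inference idea with a toy tree and a beam search; the node scorers, tree structure, and parameters are stand-ins, not the actual Parabel, Bonsai, or AttentionXML implementations.

    from dataclasses import dataclass, field
    from typing import Callable, List, Optional

    @dataclass
    class PLTNode:
        """Node of a toy Probabilistic Label Tree. Leaves carry a label id;
        every node has a scorer estimating P(node relevant | parent, doc)."""
        scorer: Callable[[str], float]        # stand-in for a trained classifier
        label: Optional[str] = None           # set only on leaves
        children: List["PLTNode"] = field(default_factory=list)

    def plt_predict(root: PLTNode, doc: str, beam_width: int = 10, top_k: int = 5):
        """Beam search down the tree: a label's score is the product of node
        probabilities along its path; unlikely subtrees are never expanded."""
        frontier = [(root.scorer(doc), root)]
        scored_labels = []
        while frontier:
            expanded = []
            for path_prob, node in frontier:
                if node.label is not None:            # reached a label leaf
                    scored_labels.append((path_prob, node.label))
                    continue
                for child in node.children:
                    expanded.append((path_prob * child.scorer(doc), child))
            frontier = sorted(expanded, key=lambda x: x[0], reverse=True)[:beam_width]
        return sorted(scored_labels, key=lambda x: x[0], reverse=True)[:top_k]

    # Toy usage: constant scorers purely for illustration (real PLTs learn them).
    leaf = lambda name, p: PLTNode(scorer=lambda doc: p, label=name)
    root = PLTNode(
        scorer=lambda doc: 1.0,
        children=[
            PLTNode(scorer=lambda doc: 0.9,
                    children=[leaf("finance", 0.8), leaf("taxation", 0.3)]),
            PLTNode(scorer=lambda doc: 0.2,
                    children=[leaf("fisheries", 0.7), leaf("agriculture", 0.6)]),
        ],
    )
    print(plt_predict(root, doc="some text"))  # 'finance' ranked first, score ~0.72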

Summary

Introduction

Large-scale Multi-label Text Classification (LMTC) is the task of assigning a subset of labels from a large predefined set (typically thousands) to a given document. Apart from the large label space, LMTC datasets often have skewed label distributions (e.g., some labels have few or no training examples) and a label hierarchy with different labelling guidelines (e.g., they may require documents to be tagged only with leaf nodes, or they may allow both leaf and other nodes to be used). The latter affects graph-aware annotation proximity (GAP), i.e., the proximity of the gold labels in the label hierarchy (see Section 4.1). Following the work of Rios and Kavuluru (2018) on few and zero-shot learning on MIMIC-III, we investigate the use of structural information from the label hierarchy in LWANs. We propose new LWAN-based models with improved performance in these settings, taking into account the labelling guidelines of each dataset and a graph-aware annotation proximity (GAP) measure that we introduce.
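
The exact GAP definition is given in Section 4.1 of the paper and is not reproduced here. Purely to illustrate the underlying idea (how close a document's gold labels sit in the label hierarchy), the sketch below computes the average shortest-path distance between pairs of gold labels over an undirected view of the hierarchy using networkx; the aggregation, normalisation, and treatment of edge directions are assumptions, not the paper's measure.

    import itertools
    import networkx as nx

    def annotation_proximity(hierarchy_edges, gold_labels):
        """Illustrative proximity score for one document's gold labels:
        the average shortest-path distance between all pairs of gold labels
        in an undirected label hierarchy (lower means more proximate)."""
        graph = nx.Graph(hierarchy_edges)      # (parent, child) pairs, undirected
        pairs = list(itertools.combinations(gold_labels, 2))
        if not pairs:
            return 0.0                         # single-label documents
        distances = [nx.shortest_path_length(graph, a, b) for a, b in pairs]
        return sum(distances) / len(distances)

    # Toy hierarchy: root -> {finance, agriculture}, finance -> {taxation, banking}.
    edges = [("root", "finance"), ("root", "agriculture"),
             ("finance", "taxation"), ("finance", "banking")]
    print(annotation_proximity(edges, ["taxation", "banking"]))      # 2.0 (siblings)
    print(annotation_proximity(edges, ["taxation", "agriculture"]))  # 3.0 (more distant)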

Advances and limitations in LMTC
The new paradigm of transfer learning
Few and zero-shot learning in LMTC
Notation for neural methods
Flat neural methods
Hierarchical PLT-based methods
Transfer learning based LMTC
Zero-shot LMTC
Graph-aware Annotation Proximity
Evaluation Measures
Implementation Details
Method
Zero-shot Learning
Conclusions
A Additional Implementation Details
C Additional Results