Abstract

Segmenting scientific abstracts into discourse categories such as background, objective, method, result, and conclusion is useful in many downstream tasks such as search, recommendation and summarization. The task of classifying each sentence of an abstract into one of a given set of discourse categories is called sequential sentence classification. Existing machine learning based approaches to this problem consider only the content of the abstract to obtain the neural representation of each sentence, which is then labelled with a discourse category. However, this ignores the semantic information offered by the discourse labels themselves. In this paper, we propose LIHT (Label Informed Hierarchical Transformers), a method for sequential sentence classification that explicitly and hierarchically exploits the semantic information in the labels to learn label-aware neural sentence representations. The hierarchical model captures not only the fine-grained interactions between the discourse labels and the words of the abstract at the sentence level but also the dependencies that may exist in the label sequence. LIHT thus generates label-aware contextual sentence representations that are then labelled with a conditional random field. We evaluate LIHT on three publicly available datasets, namely, PUBMED-RCT, NICTA-PIBOSO and CSAbstract. The incremental gain in F1-score in all three cases over the respective state-of-the-art approaches is around . Though the gains are modest, LIHT establishes a new performance benchmark for this task and is a novel technique of independent interest. We also perform an ablation study to quantify the contribution of each component of LIHT to the observed performance, and a case study to visualize the roles of the different components of our model.
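To make the architecture concrete, below is a minimal sketch of the label-informed hierarchical encoding described above, written in PyTorch. All names (LIHTSketch, label_embed, and so on) and layer sizes are hypothetical placeholders; the abstract does not publish code, so this is only an illustration of the general technique: words attend to learnable label embeddings to form label-aware sentence vectors, a second transformer contextualizes those vectors across the abstract, and the resulting per-sentence scores are intended as emissions for a linear-chain CRF decoder.

```python
import torch
import torch.nn as nn

class LIHTSketch(nn.Module):
    """Illustrative label-informed hierarchical encoder (not the authors' code)."""

    def __init__(self, vocab_size, num_labels, d_model=128, nhead=4):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, d_model)
        # Learnable embeddings carrying the semantic information of the labels
        self.label_embed = nn.Embedding(num_labels, d_model)
        # Sentence-level encoder over the words of each sentence
        self.word_encoder = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        # Cross-attention: every word attends to every label embedding,
        # modelling the fine-grained label-word interactions
        self.label_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        # Abstract-level encoder that contextualizes sentence vectors
        self.sent_encoder = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.classifier = nn.Linear(d_model, num_labels)

    def forward(self, token_ids):
        # token_ids: (num_sents, max_words) -- one abstract, one sentence per row
        words = self.word_encoder(self.word_embed(token_ids))
        labels = self.label_embed.weight.unsqueeze(0).expand(words.size(0), -1, -1)
        label_aware, _ = self.label_attn(words, labels, labels)
        # Mean-pool words into one label-aware vector per sentence
        sents = label_aware.mean(dim=1).unsqueeze(0)   # (1, num_sents, d_model)
        sents = self.sent_encoder(sents)
        # Emission scores per sentence; a CRF layer would decode the sequence
        return self.classifier(sents)
```

Decoding the label sequence from these emissions could then use, for example, the pytorch-crf package (an assumption; the paper does not name its CRF implementation):

```python
from torchcrf import CRF  # pip install pytorch-crf

crf = CRF(num_labels, batch_first=True)
emissions = model(token_ids)        # (1, num_sents, num_labels)
best_paths = crf.decode(emissions)  # most likely discourse-label sequence
```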
