Abstract

This article introduces an approach that learns segment-level context for sequence labeling in natural language processing (NLP). Previous approaches limit their basic unit to the word for feature extraction because sequence labeling is a token-level task in which labels are annotated word by word. However, the text segment is the ultimate unit for labeling, and segment information can easily be obtained from labels annotated in an IOB/IOBES format. Most neural sequence labeling models expand their learning capacity by employing additional layers, such as a character-level layer, or by jointly training on related NLP tasks that share common knowledge. The architecture of our model is based on the charLSTM-BiLSTM-CRF model, and we extend it with an additional segment-level layer called segLSTM. We therefore propose a sequence labeling algorithm called charLSTM-BiLSTM-CRF-segLSTM$^{sLM}$, which employs an additional segment-level long short-term memory (LSTM) that learns features from the adjacent context within a segment. We demonstrate the performance of our model on four sequence labeling datasets, namely, Penn Treebank, CoNLL 2000, CoNLL 2003, and OntoNotes 5.0. Experimental results show that our model performs better than state-of-the-art variants of BiLSTM-CRF. In particular, the proposed model improves performance on finding the correct labels for segments spanning multiple tokens.
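The sketch below illustrates, in broad strokes, how a character-level LSTM, a word-level BiLSTM, and an additional segment-level LSTM could be combined as described above. It is a minimal illustration, not the authors' implementation: the tensor shapes, the span-pooling strategy, the hyperparameters, and the omission of the CRF layer are all simplifying assumptions made for brevity.

```python
# Minimal sketch (assumed, not the paper's code) of a charLSTM-BiLSTM encoder
# extended with a segment-level LSTM ("segLSTM") over spans recovered from
# IOB/IOBES labels. The final CRF layer of the full model is omitted.
import torch
import torch.nn as nn


class CharBiLSTMSegEncoder(nn.Module):
    def __init__(self, n_chars, n_words, char_dim=25, word_dim=100, hidden=200):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_lstm = nn.LSTM(char_dim, char_dim, batch_first=True,
                                 bidirectional=True)
        self.word_emb = nn.Embedding(n_words, word_dim)
        # Word-level BiLSTM over [word embedding ; char-LSTM features].
        self.word_lstm = nn.LSTM(word_dim + 2 * char_dim, hidden // 2,
                                 batch_first=True, bidirectional=True)
        # Segment-level LSTM that reads the BiLSTM states of the tokens
        # inside one segment and summarizes their adjacent context.
        self.seg_lstm = nn.LSTM(hidden, hidden // 2, batch_first=True,
                                bidirectional=True)

    def forward(self, word_ids, char_ids, segments):
        # char_ids: (seq_len, max_word_len); use the last char-LSTM state per word.
        char_out, _ = self.char_lstm(self.char_emb(char_ids))
        char_feat = char_out[:, -1, :]                      # (seq_len, 2*char_dim)
        words = torch.cat([self.word_emb(word_ids), char_feat], dim=-1)
        token_ctx, _ = self.word_lstm(words.unsqueeze(0))   # (1, seq_len, hidden)
        token_ctx = token_ctx.squeeze(0)

        # segments: list of (start, end) spans recovered from IOB/IOBES labels.
        seg_feats = torch.zeros_like(token_ctx)
        for start, end in segments:
            span = token_ctx[start:end].unsqueeze(0)
            span_ctx, _ = self.seg_lstm(span)               # segment-level context
            seg_feats[start:end] = span_ctx.squeeze(0)
        # The full model would feed these features to a CRF over the label sequence.
        return torch.cat([token_ctx, seg_feats], dim=-1)
```

In this sketch, the segment-level features are concatenated to the token-level BiLSTM states before scoring; the paper's actual fusion of segLSTM output with the CRF may differ.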
