Abstract

Sequence labeling is a fundamental task in natural language processing and underpins many text-processing applications. Conventional sequence labeling approaches rely heavily on hand-crafted or language-specific features, which are time-consuming to engineer. Most existing methods therefore build on the BiLSTM-CRF model, but their main limitation lies in how the neural network extracts useful representations for each unit or segment of the input sequence. To address this problem, this paper proposes a sequence labeling algorithm based on hierarchical features and an attention mechanism. The model uses a hierarchical structure to integrate character-level and word-level information, applies a separate attention mechanism to each of these two levels, and exploits the structural characteristics of each level to mine additional latent information. The captured, attention-guided features are then passed to a CRF layer for tag prediction. Finally, the proposed model is evaluated in comparative experiments and the results are analyzed.
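
To make the described architecture concrete, the following is a minimal sketch in PyTorch of a hierarchical encoder that combines character-level and word-level information, applies attention at each level, and produces per-token emission scores that a CRF layer would consume. All names, dimensions, and design details here are illustrative assumptions, not the authors' implementation; the CRF decoding step is omitted.

```python
import torch
import torch.nn as nn


class HierarchicalAttnEncoder(nn.Module):
    def __init__(self, n_chars, n_words, char_dim=30, word_dim=100,
                 hidden=128, n_tags=10):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.word_emb = nn.Embedding(n_words, word_dim, padding_idx=0)
        # Character-level BiLSTM: encodes the characters of each word.
        self.char_lstm = nn.LSTM(char_dim, char_dim,
                                 batch_first=True, bidirectional=True)
        self.char_attn = nn.Linear(2 * char_dim, 1)  # attention over characters within a word
        # Word-level BiLSTM over [word embedding ; attended character feature].
        self.word_lstm = nn.LSTM(word_dim + 2 * char_dim, hidden,
                                 batch_first=True, bidirectional=True)
        self.word_attn = nn.Linear(2 * hidden, 1)    # attention over words in the sentence
        self.emit = nn.Linear(4 * hidden, n_tags)    # emission scores to be fed to a CRF

    def forward(self, char_ids, word_ids):
        # char_ids: (batch, seq_len, word_len); word_ids: (batch, seq_len)
        b, t, c = char_ids.shape
        ch, _ = self.char_lstm(self.char_emb(char_ids.view(b * t, c)))
        # Character-level attention pools each word's characters into one vector.
        a = torch.softmax(self.char_attn(ch), dim=1)
        char_feat = (a * ch).sum(dim=1).view(b, t, -1)          # (batch, seq_len, 2*char_dim)
        wh, _ = self.word_lstm(
            torch.cat([self.word_emb(word_ids), char_feat], dim=-1))
        # Word-level attention produces a sentence context shared by every token.
        g = torch.softmax(self.word_attn(wh), dim=1)
        ctx = (g * wh).sum(dim=1, keepdim=True).expand_as(wh)
        return self.emit(torch.cat([wh, ctx], dim=-1))          # (batch, seq_len, n_tags)
```

In this sketch the per-token scores returned by the encoder would serve as the emission potentials of a linear-chain CRF, which performs the final tag prediction as described in the abstract.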
