Abstract

Sequence labeling models based on recurrent neural network variants, such as long short-term memory (LSTM) and gated recurrent units, show promising performance in several natural language processing tasks, including named entity recognition (NER). Most current models use unidirectional decoders, which reason only about the past and cannot exploit future context while generating predictions; as a result, these models tend to produce unbalanced outputs. Moreover, most existing NER models rely on word embeddings to capture similarities between words but struggle with previously unobserved or infrequent words. Building on recent work in deep learning, we propose a bidirectional encoder–decoder model for Arabic NER in which both the encoder and the decoder are bidirectional LSTMs. In addition to word-level embeddings, character-level embeddings are adopted, and the two are combined via an embedding-level attention mechanism, through which the model dynamically determines how much information to draw from the word-level or character-level representation of each word. Experimental results on the ANERCorp and AQMAR datasets show that the model with a bidirectional encoder–decoder network and an embedding attention layer achieves an F-score of approximately 92%.
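
As a rough illustration of the embedding-level attention described above, the sketch below mixes word- and character-level embeddings with a learned gate. The module name, layer sizes, and the exact gating formula are assumptions made for illustration only, not the implementation reported in the paper.

```python
import torch
import torch.nn as nn


class EmbeddingAttention(nn.Module):
    """Gate-style attention mixing word- and character-level embeddings.

    Hypothetical sketch: the paper's exact formulation and dimensions
    are assumptions here.
    """

    def __init__(self, dim: int):
        super().__init__()
        # Per-dimension gate computed from the concatenated embeddings.
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, word_emb: torch.Tensor, char_emb: torch.Tensor) -> torch.Tensor:
        # word_emb, char_emb: (batch, seq_len, dim)
        z = torch.sigmoid(self.gate(torch.cat([word_emb, char_emb], dim=-1)))
        # z decides, per dimension, how much to trust the word-level embedding
        # versus the character-level one (useful for rare or unseen words).
        return z * word_emb + (1.0 - z) * char_emb


if __name__ == "__main__":
    att = EmbeddingAttention(dim=100)
    w = torch.randn(2, 5, 100)   # word-level embeddings
    c = torch.randn(2, 5, 100)   # character-level embeddings (e.g. from a char-BiLSTM)
    mixed = att(w, c)            # (2, 5, 100); would be fed to the BiLSTM encoder
    print(mixed.shape)
```

The gated combination lets the model fall back on character-level information for out-of-vocabulary or infrequent words while relying on word embeddings when they are reliable.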
