Abstract

Amharic is a morphologically complex and under-resourced language, which poses difficulties for the development of natural language processing applications. This paper presents the development of a semantic role labeler for Amharic text using an end-to-end deep neural network architecture. The system implicitly captures morphological, semantic, and contextual features of a word at different levels of the architecture, and incorporates the syntactic structure of an input sentence. The proposed neural network architecture has four core layers, from bottom to top: a non-contextual word embedding layer, a contextual word embedding layer, a fully connected layer, and a sequence decoding layer. The non-contextual word embedding layer is formed from the concatenation of character-based, word-based, and sentence-based word embeddings. This layer captures the morphological and semantic features of a given word by making use of a BiLSTM recurrent neural network. At the contextual word embedding layer, a context-sensitive embedding of a word is generated by applying a new LSTM layer on top of the concatenated non-contextual word embedding layer. A fully connected layer is added on top of the contextual word embedding layer to supplement it by extracting dependencies among training samples in the corpus. At the sequence decoding layer, a sequence of semantic role labels is predicted using a linear-chain conditional random field algorithm, which captures the dependencies among semantic role labels. In addition to the four core layers, the architecture has dropout layers to prevent overfitting. The proposed system achieves 94.96% accuracy and an 81.2% F1 score on the test data.
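The sequence decoding step described above relies on Viterbi decoding over a linear-chain CRF: the network emits per-token label scores, and a learned transition matrix scores moves between adjacent labels. The sketch below is a minimal, generic illustration of that decoding step (not the authors' implementation); the score shapes and names are assumptions for exposition.

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """Return the highest-scoring label sequence for a linear-chain CRF.

    emissions:   (T, K) array of per-token label scores from the network
    transitions: (K, K) array; transitions[i, j] scores moving label i -> j
    """
    T, K = emissions.shape
    score = emissions[0].copy()            # best score of a path ending in each label
    backptr = np.zeros((T, K), dtype=int)  # back-pointers for path recovery
    for t in range(1, T):
        # total[i, j] = best path ending in i at t-1, then i -> j, then emit j at t
        total = score[:, None] + transitions + emissions[t]
        backptr[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    # follow back-pointers from the best final label
    best = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        best.append(int(backptr[t, best[-1]]))
    return best[::-1]
```

The transition matrix is what lets the CRF capture dependencies among role labels: a strongly negative transition score can override the per-token emissions and forbid label sequences the model has learned are invalid.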

