Abstract

Semantic role labeling (SRL) is an effective approach to understanding the meanings underlying word relationships in natural language sentences. Recent studies using deep neural networks, specifically recurrent neural networks, have significantly improved over traditional shallow models. However, because of their recurrent updates, these models require long training times on large data sets, and they cannot capture the hierarchical structure of language. We propose a novel deep neural model for semantic role labeling that provides selective connections among attentive representations and removes the recurrent updates. Experimental results show that our model outperforms state-of-the-art systems in accuracy, achieving F1 scores of 86.6 and 83.6 on the CoNLL 2005 and CoNLL 2012 shared tasks, respectively. The accuracy gains come from capturing hierarchical information through the connection module. Moreover, we show that our model can be parallelized, avoiding repetitive sequential updates; as a result, it reduces training time by 62% relative to the baseline.
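The abstract contrasts recurrent updates with attention-based computation that processes all positions in parallel. As a generic illustration only (not the authors' actual architecture or its selective connections), a single-head scaled dot-product self-attention step can be sketched as:

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention.

    x: (seq_len, d_model) token representations.
    wq, wk, wv: (d_model, d_model) projection matrices (randomly
    initialized here purely for illustration).
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)  # (seq_len, seq_len) pairwise scores
    # Numerically stable softmax over the key dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Every output position is a weighted mix of all value vectors,
    # computed in one matrix product -- no sequential recurrence.
    return weights @ v

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8
x = rng.standard_normal((seq_len, d_model))
wq, wk, wv = (rng.standard_normal((d_model, d_model)) for _ in range(3))
out = self_attention(x, wq, wk, wv)
print(out.shape)  # one attention pass over the whole sequence at once
```

Because the whole sequence is handled with dense matrix products rather than a step-by-step hidden-state update, this style of computation parallelizes across positions, which is the property the abstract credits for the reduced training time.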

Highlights

  • Semantic representation is important for the machine to understand the meaning of data

  • We compare our model to five existing studies: two bidirectional long short-term memory (LSTM)-based models by He et al. [9,11], including the state-of-the-art semantic role labeling (SRL) model; a self-attention-based model by Tan et al. [15]; and others [7,25]

  • We propose a deep self-attention network for the task of SRL


Summary

Introduction

Semantic representation is important for machines to understand the meaning of data. Semantic role labeling (SRL) is the task of constructing a dependency structure that represents the semantics of a natural language sentence. Specifically, it assigns pre-defined labels, called semantic roles, answering questions such as ‘when’, ‘who’, ‘what to whom’, or ‘where’, to the non-predicate words dependent on each predicate. The aim of SRL is to accurately predict these labels between verbs and other words in a given sentence. SRL has been applied to benefit many natural language processing (NLP) applications, such as question answering [3], machine translation [4], and many others [5].
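To make the task concrete, SRL systems commonly emit one BIO tag per token for each predicate, which can then be grouped into role spans. The sentence, labels, and helper below are a hypothetical illustration, not data or code from this work:

```python
# Hypothetical BIO-tagged SRL output for the predicate "gave".
sentence = ["Mary", "gave", "John", "a", "book", "yesterday"]
# B- marks the start of a role span, I- its continuation, O no role.
labels = ["B-ARG0", "B-V", "B-ARG2", "B-ARG1", "I-ARG1", "B-ARGM-TMP"]

def extract_roles(tokens, tags):
    """Group per-token BIO tags into (role, span text) pairs."""
    roles, current = [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                roles.append(current)
            current = (tag[2:], [tok])  # open a new span
        elif tag.startswith("I-") and current:
            current[1].append(tok)      # extend the open span
        else:
            if current:
                roles.append(current)
            current = None
    if current:
        roles.append(current)
    return [(role, " ".join(words)) for role, words in roles]

print(extract_roles(sentence, labels))
# → [('ARG0', 'Mary'), ('V', 'gave'), ('ARG2', 'John'),
#    ('ARG1', 'a book'), ('ARGM-TMP', 'yesterday')]
```

Here ‘who’ (ARG0, the giver), ‘what to whom’ (ARG1 the thing given, ARG2 the recipient), and ‘when’ (ARGM-TMP) correspond directly to the role questions mentioned above.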

Methods
Results
Discussion
Conclusion

