Abstract

Semantic role labeling (SRL) aims to recognize the predicate-argument structure of a sentence. Much attention has been paid to syntactic information for its role in enhancing SRL. However, recent advances suggest that syntax may be less important for SRL, given the shrinking gap between syntax-aware and syntax-agnostic models. To comprehensively explore the role of syntax in the SRL task, we extend existing models and propose a unified framework for investigating more effective and more diverse ways of incorporating syntax into sequential neural networks. Examining the effect of syntactic input quality on SRL performance, we confirm that high-quality syntactic parses can still effectively enhance syntactically-driven SRL. Using an empirically optimized integration strategy, we even enlarge the gap between syntax-aware and syntax-agnostic SRL. Our framework achieves state-of-the-art results on the CoNLL-2009 benchmarks for both English and Chinese, substantially outperforming all previous models.

Highlights

  • The purpose of semantic role labeling (SRL) is to derive the predicate-argument structure of each predicate in a sentence

  • We compare our syntactic graph convolutional network (Syn-GCN), Syntax-Aware Long Short-Term Memory (SA-LSTM), and Tree-LSTM models with previous approaches to dependency SRL on both English and Chinese

  • This paper presents a unified neural framework for dependency-based SRL that effectively incorporates syntactic information by directly modeling the syntactic parse tree


Summary

Introduction

The purpose of semantic role labeling (SRL) is to derive the predicate-argument structure of each predicate in a sentence. A popular formalism for representing this structure is based on dependencies, namely dependency SRL, which annotates the heads of arguments rather than full phrasal arguments. Given a sentence (as in Figure 1), SRL is generally decomposed into several subtasks. Recent syntax-agnostic neural models have nevertheless achieved favorable results, which seems to conflict with the belief that syntactic information is an absolutely necessary prerequisite for high-performance SRL (Gildea and Palmer, 2002). It remains challenging to effectively incorporate syntactic information into neural SRL models, owing to the sophisticated tree structure of syntactic relations. Moreover, syntactic parsers are unreliable, and erroneous syntactic input carries the risk of error propagation and unsatisfactory SRL performance.
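To make the idea of directly modeling the dependency parse concrete, the following is a minimal sketch of one syntax-GCN-style layer: each word's representation is updated from its own features and those of its syntactic head. This is an illustrative simplification, not the paper's exact architecture; the function name `syn_gcn_layer`, the weight matrices, and the toy parse are all hypothetical, and full models additionally use dependency-label-specific weights, both edge directions, and gating.

```python
import numpy as np

def syn_gcn_layer(h, heads, W_self, W_dep):
    """Simplified syntax-GCN layer (illustrative sketch).

    h:      (n, d) array of word representations
    heads:  list of head indices from a dependency parse (-1 = root)
    W_self: (d, d) weight for a word's own representation
    W_dep:  (d, d) weight for the representation of its syntactic head
    """
    out = h @ W_self
    for i, head in enumerate(heads):
        if head >= 0:
            # aggregate information along the head -> dependent arc
            out[i] += h[head] @ W_dep
    return np.maximum(out, 0.0)  # ReLU nonlinearity

# Toy example: 3 words; word 1 is the root, words 0 and 2 attach to it.
rng = np.random.default_rng(0)
h = rng.standard_normal((3, 4))
W_self = rng.standard_normal((4, 4))
W_dep = rng.standard_normal((4, 4))
heads = [1, -1, 1]
print(syn_gcn_layer(h, heads, W_self, W_dep).shape)  # (3, 4)
```

Stacking several such layers lets information propagate along multi-step paths in the parse tree, which is one way syntax-aware models expose long-range predicate-argument relations to the sequential encoder.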

