Abstract

Recently, semantic role labeling (SRL) has achieved a series of successes with ever higher performance, improvements that can be mainly attributed to syntactic integration and enhanced word representations. However, most of these efforts focus on English, while SRL for languages other than English has received relatively little attention and remains underdeveloped. This paper therefore aims to fill the gap in multilingual SRL, with a special focus on the impact of syntax and contextualized word representations. Unlike existing work, we propose a novel method guided by syntactic rules to prune arguments, which enables us to integrate syntax into a multilingual SRL model simply and effectively. We present a unified SRL model designed for multiple languages, together with the proposed uniform syntax enhancement. Our model achieves new state-of-the-art results on the CoNLL-2009 benchmarks for all seven languages. In addition, we discuss the role of syntax across different languages and verify the effectiveness of deep enhanced representations for multilingual SRL.

Highlights

  • Semantic role labeling (SRL) aims to derive a meaning representation, such as an instantiated predicate-argument structure, for a sentence

  • Be it dependency or span, SRL plays a critical role in many natural language processing (NLP) tasks, including information extraction (Christensen et al., 2011), machine translation (Xiong et al., 2012) and question answering (Yih et al., 2016)

  • Applying the k-order syntactic tree pruning of He et al. (2018) to the biaffine SRL model (Cai et al., 2018) does not boost performance as expected, which indicates that exploiting syntactic clues in state-of-the-art SRL models still deserves deeper exploration

Introduction

Semantic role labeling (SRL) aims to derive a meaning representation, such as an instantiated predicate-argument structure, for a sentence. The currently popular formalisms for representing the semantic predicate-argument structure are based on dependencies and spans. Their main difference is that dependency SRL annotates the syntactic head of an argument rather than the entire constituent (span); this paper focuses on dependency-based SRL. Be it dependency or span, SRL plays a critical role in many natural language processing (NLP) tasks, including information extraction (Christensen et al., 2011), machine translation (Xiong et al., 2012) and question answering (Yih et al., 2016). Almost all traditional SRL methods relied heavily on syntactic features, which carried the risk of erroneous syntactic input and thus of undesired error propagation. Applying the k-order syntactic tree pruning of He et al. (2018) to the biaffine SRL model (Cai et al., 2018) does not boost performance as expected, which indicates that exploiting syntactic clues in state-of-the-art SRL models still deserves deeper exploration.
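
As a minimal sketch in Python of the kind of k-order pruning referred to above (the head-array tree encoding, the function names, and the inclusion of the path nodes themselves are our own illustrative assumptions, not code from He et al. (2018)):

    # A minimal sketch of k-order argument pruning in the spirit of He et al.
    # (2018): starting from the predicate, keep the descendants within k hops
    # of the current node, then move to the node's syntactic head and repeat
    # until the root is reached. Encoding and names are illustrative assumptions.

    def children(heads, node):
        # Direct dependents of `node`, where heads[i] is the head of token i
        # and the root has head -1.
        return [i for i, h in enumerate(heads) if h == node]

    def descendants_within_k(heads, node, k):
        # Nodes reachable from `node` in at most k dependency edges, excluding `node`.
        found, frontier = set(), [node]
        for _ in range(k):
            frontier = [c for n in frontier for c in children(heads, n)]
            found.update(frontier)
        return found

    def k_order_prune(heads, predicate, k):
        # Candidate argument positions for `predicate` under k-order pruning.
        candidates, current = set(), predicate
        while current != -1:                      # walk from the predicate up to the root
            candidates |= descendants_within_k(heads, current, k)
            candidates.add(current)               # also keep the node on the path itself
            current = heads[current]              # move to the syntactic head
        candidates.discard(predicate)             # the predicate is not its own argument
        return sorted(candidates)

    # Toy example: "She gave him a book", with "gave" (index 1) as the root.
    heads = [1, -1, 1, 4, 1]                      # She<-gave, him<-gave, a<-book, book<-gave
    print(k_order_prune(heads, predicate=1, k=1)) # first order: [0, 2, 4]
    print(k_order_prune(heads, predicate=1, k=2)) # second order also reaches "a": [0, 2, 3, 4]

Under such a scheme, a larger k keeps more true arguments among the candidates but leaves more spurious positions for the labeler to score, which is precisely the trade-off that makes the choice of syntactic pruning rule non-trivial.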
