Abstract

This paper introduces a novel model for semantic role labeling that makes use of neural sequence modeling techniques. Our approach is motivated by the observation that complex syntactic structures and related phenomena, such as nested subordinations and nominal predicates, are not handled well by existing models. Our model treats such instances as subsequences of lexicalized dependency paths and learns suitable embedding representations. We experimentally demonstrate that such embeddings can improve results over previous state-of-the-art semantic role labelers, and showcase qualitative improvements obtained by our method.

Highlights

  • The goal of semantic role labeling (SRL) is to identify and label the arguments of semantic predicates in a sentence according to a set of predefined relations (e.g., “who” did “what” to “whom”)

  • We develop a new neural network model that can be applied to the task of semantic role labeling

  • We experiment on the in-domain and out-of-domain test sets provided in the CoNLL-2009 shared task (Hajič et al., 2009) and compare the results of our system, PathLSTM, with those of systems that do not involve path embeddings

Introduction

The goal of semantic role labeling (SRL) is to identify and label the arguments of semantic predicates in a sentence according to a set of predefined relations (e.g., “who” did “what” to “whom”). Semantic roles provide a layer of abstraction beyond syntactic dependency relations, such as subject and object, in that the provided labels are insensitive to syntactic alternations and can also be applied to nominal predicates. The task was pioneered by Gildea and Jurafsky (2002).

[Table residue: example system analysis by mate-tools: *He had [troubleA0] raising [fundsA1].]
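To make the notion of a lexicalized dependency path concrete, the sketch below (not the authors' code; names and the toy parse are illustrative assumptions) extracts the path between a predicate and a candidate argument from a dependency tree. Sequences of this form, alternating words and dependency relations with a direction marker, are what PathLSTM-style models feed to a recurrent network to learn path embeddings.

```python
# Hypothetical sketch: extracting the lexicalized dependency path between a
# predicate and a candidate argument. The resulting token/relation sequence is
# the kind of input a PathLSTM-style model would embed.

def dependency_path(heads, labels, words, pred, arg):
    """Return the lexicalized path from predicate to argument.

    heads[i]  -- index of token i's syntactic head (-1 for the root)
    labels[i] -- dependency relation of token i to its head
    """
    def ancestors(i):
        # Chain of nodes from i up to the root, inclusive.
        chain = [i]
        while heads[i] != -1:
            i = heads[i]
            chain.append(i)
        return chain

    up = ancestors(pred)
    down = ancestors(arg)
    common = next(n for n in up if n in down)  # lowest common ancestor

    path = []
    for n in up[:up.index(common)]:            # predicate -> common ancestor
        path += [words[n], labels[n] + "^"]    # "^" marks an upward edge
    path.append(words[common])
    for n in reversed(down[:down.index(common)]):  # common ancestor -> argument
        path += [labels[n] + "v", words[n]]    # "v" marks a downward edge
    return path

# Toy parse of "He had trouble raising funds" (heads/labels are assumptions,
# not gold CoNLL-2009 annotations):
words  = ["He", "had", "trouble", "raising", "funds"]
heads  = [1, -1, 1, 2, 3]
labels = ["SBJ", "ROOT", "OBJ", "NMOD", "OBJ"]

# Path from the predicate "raising" to its A0 argument "He":
print(dependency_path(heads, labels, words, pred=3, arg=0))
# -> ['raising', 'NMOD^', 'trouble', 'OBJ^', 'had', 'SBJv', 'He']
```

Long paths of this kind, e.g. through a nesting noun such as "trouble" above, are exactly the nested-subordination cases the paper argues are hard for feature-based labelers and that learned path embeddings are meant to capture.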

