Abstract

The latest developments in neural semantic role labeling (SRL) have shown great performance improvements in both the dependency and span formalisms (styles). Although the two styles share many similarities in linguistic meaning and computation, most previous studies focus on a single style. In this article, we define a new cross-style semantic role label convention and propose a new cross-style joint optimization model designed around the most basic linguistic meaning of a semantic role. Our work makes the results of the two styles more comparable and allows both SRL formalisms to benefit from their natural connections in both linguistics and computation. Our model learns a general semantic argument structure and is capable of outputting in either style. Additionally, we propose a syntax-aided method to uniformly enhance the learning of both dependency and span representations. Experiments show that the proposed methods are effective on both span and dependency SRL benchmarks.
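
To make the distinction between the two formalisms concrete, the minimal sketch below (illustrative only, not the authors' model; the example sentence, role labels, and the `syntactic_head` helper are assumptions) shows how the same predicate-argument structure can be encoded in span style (full argument phrases) versus dependency style (single argument head words), and why converting span output to dependency output typically relies on syntactic head information.

```python
# Illustrative sketch of the two SRL output styles for one sentence.
# Example sentence (token indices):
#   0:The 1:cat 2:chased 3:the 4:mouse 5:yesterday
# Predicate: "chased" (index 2)

# Span-style SRL: each argument is a contiguous token span (start, end) with a role.
span_srl = {
    "predicate": 2,
    "arguments": [
        {"role": "ARG0", "span": (0, 1)},      # "The cat"
        {"role": "ARG1", "span": (3, 4)},      # "the mouse"
        {"role": "ARGM-TMP", "span": (5, 5)},  # "yesterday"
    ],
}

# Dependency-style SRL: each argument is a single head token with a role.
dep_srl = {
    "predicate": 2,
    "arguments": [
        {"role": "ARG0", "head": 1},      # "cat"
        {"role": "ARG1", "head": 4},      # "mouse"
        {"role": "ARGM-TMP", "head": 5},  # "yesterday"
    ],
}

def span_to_dependency(span_args, syntactic_head):
    """Map span-style arguments to dependency style by selecting each span's
    syntactic head. `syntactic_head` is a user-supplied function (assumed here)
    that returns the head token index of a span, e.g. derived from a parse tree."""
    return [{"role": a["role"], "head": syntactic_head(a["span"])}
            for a in span_args]
```

A joint model that predicts a shared argument structure can emit either representation at output time, which is what makes results in the two conventions directly comparable.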
