Abstract

Semantic role labeling (SRL) is a core task in natural language processing. End-to-end SRL models, built on graph-based neural architectures, have achieved performance competitive with traditional pipeline systems. However, existing models are all first-order methods: decisions about individual predicate-argument pairs are made in isolation from local, short-range features, and are thus error-prone. They also suffer from long-distance dependencies and perform poorly in data-scarce scenarios. In this paper, we explore a second-order end-to-end SRL model that scores two predicate-argument pairs simultaneously. We further propose a structural refinement mechanism that models higher-order interactions at a global scope; after several refinement iterations, the underlying token representations become more informative. Experimental results show that our model significantly outperforms state-of-the-art baselines on multiple benchmarks (i.e., CoNLL08, CoNLL09, and CoNLL12). Further analysis demonstrates that our framework learns latent SRL constraints, is more robust in data-scarce settings, and relieves the long-range dependency issue.
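To make the first-order vs. second-order distinction concrete, here is a minimal sketch in plain Python. It assumes a generic decomposition in which a first-order table `s1` scores each predicate-argument edge alone, and a second-order table `s2` adds a joint term for sibling edges (two arguments of the same predicate). All names and scores are illustrative assumptions, not the paper's actual model or parameterization.

```python
# Hypothetical sketch: first-order vs. second-order SRL edge scoring.
# s1 and s2 stand in for learned neural scoring functions.

from itertools import combinations

def first_order_score(edge, s1):
    """Score one predicate-argument edge in isolation (first-order)."""
    return s1.get(edge, 0.0)

def second_order_score(edges, s1, s2):
    """Score a set of edges: sum of first-order terms plus a
    second-order term for every pair of edges sharing a predicate."""
    total = sum(first_order_score(e, s1) for e in edges)
    for (p1, a1), (p2, a2) in combinations(sorted(edges), 2):
        if p1 == p2:  # sibling arguments of one predicate scored jointly
            total += s2.get(((p1, a1), (p2, a2)), 0.0)
    return total

# Toy example (values are made up):
s1 = {("ate", "John"): 1.0, ("ate", "apple"): 0.5}
s2 = {(("ate", "John"), ("ate", "apple")): 0.3}
edges = [("ate", "John"), ("ate", "apple")]
```

A first-order model stops at the `s1` sum; the second-order term rewards (or penalizes) combinations of edges, which is what lets the model capture interactions between predicate-argument pairs rather than deciding each one independently.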
