Abstract

The recent success of neural networks in NLP applications has provided a strong impetus to develop supervised models for semantic role labeling (SRL) that forego the requirement for extensive feature engineering. Recent state-of-the-art approaches require high-quality annotated datasets that are costly to obtain and largely unavailable for low-resource languages. We present a semi-supervised approach that utilizes both labeled and unlabeled data to improve performance over a purely supervised SRL model. We show that our proposed semi-supervised SRL model yields larger improvements over a supervised model when the labeled training data is small. Our SRL system leverages unlabeled data under the language modeling paradigm. We demonstrate that incorporating a self pre-trained bidirectional language model (S-PrLM) into an SRL system can improve SRL performance by learning composition functions from the unlabeled data. Previous research has concluded that syntactic information is very useful for high-performing SRL systems, so we incorporate syntax by employing an unsupervised approach that leverages dependency path information to connect argument candidates in vector space, which helps distinguish arguments with similar contexts but different syntactic functions. The basic idea is to connect a predicate (w_p) with an argument candidate (w_a) through the dependency path (r) between them in the embedding space. Experiments on the CoNLL-2008 and CoNLL-2009 datasets confirm that our full SRL model outperforms previous best models in terms of F1 score.
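The abstract does not spell out the exact training objective used to tie a predicate w_p to an argument candidate w_a through the dependency path r. A minimal sketch, assuming a translation-style formulation (emb(w_p) + emb(r) ≈ emb(w_a)) trained with a margin-based ranking loss over randomly sampled negative arguments, might look as follows; the class name, dimensions, and negative-sampling scheme below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PathEmbedding(nn.Module):
    """Sketch: embed predicates, arguments, and dependency paths in one space
    so that emb(w_p) + emb(r) lies close to emb(w_a) for observed triples."""

    def __init__(self, vocab_size, num_paths, dim=100, margin=1.0):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim)   # predicates and argument candidates
        self.path_emb = nn.Embedding(num_paths, dim)    # dependency-path labels
        self.margin = margin

    def score(self, pred_ids, path_ids, arg_ids):
        # Lower distance means a better fit of the (predicate, path, argument) triple.
        pred_vec = self.word_emb(pred_ids)
        path_vec = self.path_emb(path_ids)
        arg_vec = self.word_emb(arg_ids)
        return torch.norm(pred_vec + path_vec - arg_vec, p=2, dim=-1)

    def forward(self, pred_ids, path_ids, arg_ids, neg_arg_ids):
        # Margin-based ranking loss: observed arguments should score lower
        # (i.e., sit closer) than randomly sampled negative arguments.
        pos = self.score(pred_ids, path_ids, arg_ids)
        neg = self.score(pred_ids, path_ids, neg_arg_ids)
        return torch.relu(self.margin + pos - neg).mean()
```

Under this reading, arguments that share surface context but attach to the predicate through different dependency paths receive different path vectors r, which is what separates them in the shared embedding space.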
