Abstract

We propose a method for adapting Semantic Role Labeling (SRL) systems from a source domain to a target domain by combining a neural language model with linguistic resources to generate additional training examples. We primarily aim to improve labeling of the Location, Time, Manner, and Direction roles. In our methodology, the head words of selected predicates and arguments in the source-domain training data are replaced with words from the target domain. The replacement words are generated by a language model and then filtered by several linguistic constraints, including part-of-speech (POS), WordNet, and predicate constraints. In experiments on the out-of-domain CoNLL 2009 data, using a Recurrent Neural Network Language Model (RNNLM) and a well-known semantic parser from Lund University, we show improved recall and F1 on the four targeted roles without penalizing precision. These results surpass those of the same SRL system trained without the language model and linguistic resources, and also outperform the same system trained on examples enriched with word embeddings. We further demonstrate the importance of using a language model and the vocabulary of the target domain when generating new training examples.
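
To make the generation-and-filtering step concrete, the following is a minimal Python sketch of one plausible realization of the pipeline the abstract describes. The `lm_candidates` interface standing in for the RNNLM is hypothetical, the WordNet criterion (shared hypernyms) is one possible reading of the paper's "WordNet constraint", and the predicate constraint is omitted; none of these reflect the authors' exact implementation.

```python
# Sketch of LM-based word replacement with POS and WordNet filters.
# Assumes NLTK with the WordNet corpus installed; `lm_candidates`
# (a function proposing (word, POS-tag) replacements for a position)
# is a hypothetical stand-in for the paper's RNNLM.
from nltk.corpus import wordnet as wn

def pos_match(candidate_tag, original_tag):
    # Coarse POS filter: keep candidates whose tag class matches
    # the original head word's tag class (e.g. NN* with NN*).
    return candidate_tag[:2] == original_tag[:2]

def wordnet_filter(candidate, original, pos=wn.NOUN):
    # WordNet filter (assumed criterion): require the candidate to
    # share at least one direct hypernym with the original word.
    orig_hypernyms = {h for s in wn.synsets(original, pos=pos)
                      for h in s.hypernyms()}
    cand_hypernyms = {h for s in wn.synsets(candidate, pos=pos)
                      for h in s.hypernyms()}
    return bool(orig_hypernyms & cand_hypernyms)

def augment(tokens, tags, idx, target_vocab, lm_candidates):
    # Replace the head word at position `idx` with target-domain
    # words proposed by the language model, keeping only candidates
    # that pass the vocabulary, POS, and WordNet filters.
    original = tokens[idx]
    new_sentences = []
    for cand, cand_tag in lm_candidates(tokens, idx):
        if (cand in target_vocab
                and pos_match(cand_tag, tags[idx])
                and wordnet_filter(cand, original)):
            new_sentences.append(tokens[:idx] + [cand] + tokens[idx + 1:])
    return new_sentences
```

Each surviving candidate yields a new training sentence that inherits the original role annotation, which is how the method enlarges the training set without manual labeling.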
