Abstract

The task of classifying the semantic relation between two nominals in a sentence is quite challenging due to the lack of large amounts of labeled data. Existing models for semantic relation classification were built on either synthetic training data generated from unlabeled data or hand-annotated training data. Meanwhile, previous work showed that the preposition and verb in a sentence provide important clues for discovering the semantic relation between nominals. In this paper we attempt to exploit both labeled and unlabeled data for semantic relation classification under the framework of semi-supervised multi-task learning. Specifically, to improve the generalization performance of a semantic relation classification model, we leverage the information contained in the training signals of multiple related tasks, e.g., prediction of preposition and verb labels. Results on SemEval 2007 task 4 and SemEval 2010 task 8 indicate that semi-supervised multi-task learning can help semantic relation classification, resulting in performance comparable to or even better than the state-of-the-art systems on SemEval 2007 task 4.
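The multi-task setup sketched in the abstract can be illustrated as a shared representation with one output head per task: the main relation classifier plus an auxiliary head predicting, e.g., the preposition label. The dimensions, weights, and loss weighting below are illustrative assumptions for exposition, not the authors' actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    # Mean negative log-likelihood of the gold label per example.
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

# Illustrative dimensions (not taken from the paper).
d_in, d_hidden = 50, 16
n_relations, n_preps = 7, 10   # main-task / auxiliary-task label counts

# Shared encoder weights, plus one output head per task.
W_shared = rng.normal(scale=0.1, size=(d_in, d_hidden))
W_rel = rng.normal(scale=0.1, size=(d_hidden, n_relations))
W_prep = rng.normal(scale=0.1, size=(d_hidden, n_preps))

def multitask_loss(x, y_rel, y_prep, aux_weight=0.3):
    """Joint objective: main relation loss plus a weighted auxiliary loss.

    The semi-supervised angle: preposition/verb labels can be derived
    automatically from unlabeled sentences, so the auxiliary term can be
    trained on data that has no relation annotation.
    """
    h = np.tanh(x @ W_shared)          # shared representation
    loss_rel = cross_entropy(softmax(h @ W_rel), y_rel)
    loss_prep = cross_entropy(softmax(h @ W_prep), y_prep)
    return loss_rel + aux_weight * loss_prep

x = rng.normal(size=(4, d_in))                # a mini-batch of 4 examples
y_rel = rng.integers(0, n_relations, size=4)
y_prep = rng.integers(0, n_preps, size=4)
print(multitask_loss(x, y_rel, y_prep))
```

Because the encoder weights `W_shared` receive gradients from both loss terms, the auxiliary task acts as a regularizer on the shared representation, which is the generalization benefit the abstract refers to.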
