Abstract

Compared with conventional binary relation extraction, n-ary relation extraction is particularly challenging because the multiple entities involved often span several sentences. Although current methods have achieved remarkable results in this area, they often rely on complex modeling such as dependency parsing, which is prone to error propagation. To address this problem, this paper proposes a novel framework for n-ary relation extraction built on Machine Reading Comprehension (MRC). Specifically, because the relations or sub-relations among multiple entities are often difficult to name, we learn continuous prompting questions to compensate for the limitations of natural-language questions. In addition, to alleviate the confusion caused by semantically similar relation classes, we derive supplementary prompt messages (i.e., additional knowledge) from the statistics of each relation class, equipping the model with a better capacity to distinguish among them. Finally, since single-turn MRC is prone to mistakes, especially on challenging n-ary tasks, we design a double-check mechanism that generates questions from multiple perspectives and determines the final relation among all entities by aggregating the answers to these questions. Extensive experiments demonstrate that our method achieves state-of-the-art results on n-ary relation extraction datasets.
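To make the formulation concrete, the following is a minimal sketch (not the authors' code) of how n-ary relation extraction might be cast as MRC with learnable continuous prompts and answer aggregation across several question perspectives. The encoder, dimensions, and names such as ContinuousPromptMRC are illustrative assumptions, not the published implementation.

```python
# Hypothetical sketch: n-ary relation extraction as MRC with continuous
# prompts and multi-perspective answer aggregation. All hyperparameters
# and module names are assumptions for illustration only.
import torch
import torch.nn as nn


class ContinuousPromptMRC(nn.Module):
    def __init__(self, vocab_size=30522, hidden=256, n_prompt=8,
                 n_classes=5, n_perspectives=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        # Learnable "continuous prompt" vectors stand in for hand-written
        # natural-language questions about relations that are hard to name.
        self.prompts = nn.Parameter(
            torch.randn(n_perspectives, n_prompt, hidden) * 0.02)
        layer = nn.TransformerEncoderLayer(hidden, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) -- the passage with marked entities.
        tokens = self.embed(token_ids)
        logits_per_view = []
        for p in self.prompts:  # one "question" per perspective
            prompt = p.unsqueeze(0).expand(tokens.size(0), -1, -1)
            x = torch.cat([prompt, tokens], dim=1)
            h = self.encoder(x)
            # Pool the prompt positions as the "answer" representation.
            pooled = h[:, :prompt.size(1)].mean(dim=1)
            logits_per_view.append(self.classifier(pooled))
        # Double-check: aggregate answers from all question perspectives.
        return torch.stack(logits_per_view).mean(dim=0)


if __name__ == "__main__":
    model = ContinuousPromptMRC()
    fake_batch = torch.randint(0, 30522, (2, 64))
    print(model(fake_batch).shape)  # (2, n_classes)
```

In this toy version, aggregation is a simple average of per-perspective class logits; the paper's double-check mechanism may combine answers differently.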
