Abstract

The logical reasoning task involves diverse types of complex reasoning over text, typically cast as multiple-choice question answering (MCQA). Given a context, a question, and a set of options as input, previous methods achieve strong performance in the full-data setting. However, current benchmark datasets rely on the idealized assumption that the reasoning-type distribution of the training split is close to that of the test split, which does not hold in many real application scenarios. This raises two problems to be studied: 1) how well do models generalize in a zero-shot setting (trained on seen types and tested on unseen types)? and 2) how can models' perception of reasoning types be enhanced? For the first problem, we propose ZsLR, a new benchmark for generalized zero-shot logical reasoning, which includes six splits based on three type-sampling strategies. For the second problem, we propose TaCo, a type-aware model that uses heuristic input reconstruction and builds a text graph with a global node; by incorporating graph reasoning and contrastive learning, TaCo improves type perception in the global representation. Extensive experiments in both the zero-shot and full-data settings demonstrate the superiority of TaCo over state-of-the-art (SOTA) methods. We also verify the generalization ability of TaCo on another logical reasoning dataset.
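
As a rough illustration of how contrastive learning over a global representation can sharpen reasoning-type perception, the sketch below shows a supervised, type-aware contrastive loss in PyTorch. This is not the authors' implementation; all names (type_contrastive_loss, global_emb, type_labels, temperature) and the choice of a supervised contrastive objective with same-type positives are illustrative assumptions.

```python
# Minimal sketch (assumption, not the paper's code): a supervised contrastive
# loss over global-node embeddings, where examples that share a reasoning type
# are treated as positives and all other examples as negatives.
import torch
import torch.nn.functional as F

def type_contrastive_loss(global_emb: torch.Tensor,
                          type_labels: torch.Tensor,
                          temperature: float = 0.1) -> torch.Tensor:
    """global_emb: (B, d) global-node representations; type_labels: (B,) type ids."""
    z = F.normalize(global_emb, dim=-1)            # unit-norm embeddings
    sim = z @ z.t() / temperature                  # pairwise cosine similarities
    # Mask out self-similarity so an example is never its own positive.
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Positives: other in-batch examples with the same reasoning type.
    pos_mask = (type_labels.unsqueeze(0) == type_labels.unsqueeze(1)) & ~self_mask
    pos_count = pos_mask.sum(dim=1).clamp(min=1)   # avoid division by zero
    loss = -(log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_count)
    return loss.mean()

# Example usage with random tensors:
# emb = torch.randn(8, 256); labels = torch.randint(0, 4, (8,))
# print(type_contrastive_loss(emb, labels))
```

In a setup like this, the contrastive term would be added to the standard MCQA cross-entropy loss, pulling global representations of same-type examples together so that the classifier head can exploit type-specific structure.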
