Abstract

A syllogism is a form of everyday reasoning. For instance, given that ‘Avicenna wrote the famous book the Canon of Medicine’ and ‘The Canon of Medicine has influenced modern medicine,’ it can be concluded that ‘Avicenna has influenced modern medicine.’ This study revolves around syllogistic natural language generation (NLG). The Avicenna corpus (https://github.com/ZeinabAghahadi/Syllogistic-Commonsense-Reasoning) was developed as a benchmark for syllogistic NLG. Once the syllogistic relation between two premises is recognised [Aghahadi, Z., & Talebpour, A. (2022). Language-based syllogistic reasoning using deep neural networks. Cognitive Semantics, 8(2)], models trained on Avicenna learn to generate the conclusion sentence. Experiments were performed with state-of-the-art pre-trained text-generation models, and accuracy improved by up to 32% when transfer learning was adopted. Error analysis showed that one of the main error categories was the model’s confusion in detecting the middle term. This indicates that the model learns to extract new facts from the premises but struggles with commonsense reasoning.
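The premise–conclusion pattern in the example above can be illustrated with a toy sketch (hypothetical code, not the paper's neural model): when the object of the first premise matches the subject of the second, that shared term is the middle term, and it drops out of the conclusion.

```python
def conclude(p1, p2):
    """Derive a syllogistic conclusion from two (subject, relation, object)
    premise triples, if the classic chain pattern applies."""
    s1, r1, o1 = p1
    s2, r2, o2 = p2
    # The middle term links the premises: object of p1 == subject of p2.
    # It is eliminated, leaving the conclusion (s1, r2, o2).
    if o1 == s2:
        return (s1, r2, o2)
    return None  # no syllogistic relation detected

premises = (("Avicenna", "wrote", "the Canon of Medicine"),
            ("the Canon of Medicine", "has influenced", "modern medicine"))
print(conclude(*premises))
# → ('Avicenna', 'has influenced', 'modern medicine')
```

Detecting the middle term is exactly the step the error analysis highlights: a symbolic match like this is trivial, but a generative model must infer the shared term from free-form sentences.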
