Abstract

From a purely theoretical point of view, it makes sense to approach recognizing textual entailment (RTE) with the help of logic. After all, entailment matters are all about logic. In practice, only a few RTE systems follow the bumpy road from words to logic. This is probably because it requires a combination of robust, deep semantic analysis and logical inference—and why develop something with this complexity if you can perhaps get away with something simpler? In this article, with the help of an RTE system based on Combinatory Categorial Grammar, Discourse Representation Theory, and first-order theorem proving, we make an empirical assessment of the logic-based approach. High precision paired with low recall is a key characteristic of this system. The bottleneck in achieving high recall is the lack of a systematic way to produce relevant background knowledge. There is a place for logic in RTE, but it is (still) overshadowed by the knowledge acquisition problem.
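
To make the notion of background knowledge concrete, one kind of axiom such a system typically needs is a lexical axiom derived from a hyponymy relation in a lexical resource; the particular predicates below are purely illustrative and not taken from the article:

    % Illustrative lexical axiom (hyponymy): every dog is an animal.
    \forall x\,\bigl(\mathit{dog}(x) \rightarrow \mathit{animal}(x)\bigr)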

Highlights

  • Recognizing textual entailment—predicting whether one text entails another—is a task that embraces everything that needs to be accomplished in natural language understanding

  • The challenge of translating ambiguous text into unambiguous logical formulas is usually addressed by a detailed syntactic analysis followed by a semantic analysis that produces a logical form based on the output of the syntactic parser

  • The recognizing textual entailment (RTE) data sets consist of pairs of texts, and once we have established a method to produce semantic representations (DRSs in our case) for such pairs, we arrive at the problem of translating such DRSs into formulas of first-order logic (FOL); a minimal sketch of this translation follows the list
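
As a rough illustration of that DRS-to-FOL step, the sketch below implements the standard translation for a much simplified DRS language (discourse referents plus atomic and implicational conditions). The data types, function names, and the example sentence are invented for this illustration and do not reflect the article's implementation, which handles the full DRS language.

    from dataclasses import dataclass, field
    from typing import List, Union

    # A much simplified DRS language: a DRS has discourse referents and conditions;
    # a condition is either an atomic predication or an implication between DRSs.

    @dataclass
    class Atom:
        predicate: str
        args: List[str]

    @dataclass
    class Imp:
        antecedent: "DRS"
        consequent: "DRS"

    Condition = Union[Atom, Imp]

    @dataclass
    class DRS:
        referents: List[str]
        conditions: List[Condition] = field(default_factory=list)

    def drs_to_fol(drs: "DRS") -> str:
        """Standard DRS-to-FOL translation: discourse referents become
        existential quantifiers scoping over the conjunction of the conditions."""
        body = " & ".join(cond_to_fol(c) for c in drs.conditions) or "true"
        for x in reversed(drs.referents):
            body = f"exists {x}. ({body})"
        return body

    def cond_to_fol(cond: Condition) -> str:
        if isinstance(cond, Atom):
            return f"{cond.predicate}({','.join(cond.args)})"
        # Implicational condition: the antecedent's referents become
        # universal quantifiers scoping over the implication.
        ante = " & ".join(cond_to_fol(c) for c in cond.antecedent.conditions) or "true"
        body = f"({ante}) -> ({drs_to_fol(cond.consequent)})"
        for x in reversed(cond.antecedent.referents):
            body = f"forall {x}. ({body})"
        return body

    # "Every spokesman lied":  [ | [x | spokesman(x)] => [ | lie(x)] ]
    example = DRS([], [Imp(DRS(["x"], [Atom("spokesman", ["x"])]),
                           DRS([], [Atom("lie", ["x"])]))])
    print(drs_to_fol(example))  # forall x. ((spokesman(x)) -> (lie(x)))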


Summary

Introduction

Recognizing textual entailment—predicting whether one text entails another—is a task that embraces everything that needs to be accomplished in natural language understanding. The idea is simple and rooted in the formal approaches to natural language semantics mentioned before: we translate the texts into logical formulas, and use (classical) logical inference to find out whether one text entails the other or the other way around, whether they are consistent or contradictory, and so on. Even though this idea itself sounds simple, its execution is not. In this article we describe a framework for textual inference based on first-order logic and formal theory. It comprises a system for RTE, Nutcracker, developed over the years since the start of the RTE challenge (Bos and Markert 2005).
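
To make the inference step concrete, here is a minimal sketch (not the article's implementation) of how a first-order theorem prover can be used to label a text-hypothesis pair: entailment if the text together with background knowledge proves the hypothesis, contradiction if it proves the hypothesis's negation, and unknown otherwise. The prove callable is a hypothetical stand-in for an off-the-shelf prover.

    from typing import Callable, Iterable

    # Hypothetical prover interface: given premises and a goal (first-order
    # formulas in some agreed string syntax), return True iff the goal is provable.
    Prover = Callable[[Iterable[str], str], bool]

    def classify_pair(text_fol: str, hypothesis_fol: str,
                      background: Iterable[str], prove: Prover) -> str:
        """Three-way labelling of a text-hypothesis pair via theorem proving."""
        premises = list(background) + [text_fol]
        if prove(premises, hypothesis_fol):
            return "entailment"
        if prove(premises, f"-({hypothesis_fol})"):  # "-" marks negation here
            return "contradiction"
        return "unknown"

    # Toy demonstration: a stand-in "prover" that only proves goals occurring
    # literally among the premises (a real system would call a first-order prover).
    toy_prove: Prover = lambda premises, goal: goal in list(premises)
    print(classify_pair("tall(vincent)", "tall(vincent)", [], toy_prove))  # entailment

In systems of this kind a model builder is often run alongside the theorem prover to establish consistency when no proof is found; the sketch leaves that step out.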

Semantic Interpretation
Semantic Representations and First-Order Logic
Theorem Proving
Adding Background Knowledge
Implementation and Evaluation
Related Work
Conclusion
