Abstract

In formal logic-based approaches to Recognizing Textual Entailment (RTE), a Combinatory Categorial Grammar (CCG) parser is used to parse input premises and hypotheses into their logical formulas. Here it is crucial that the parser processes the sentences consistently: if it fails to recognize their shared syntactic structure, the resulting predicate-argument structures disagree, and the subsequent theorem proving is doomed to fail. In this work, we present a simple method that extends an existing CCG parser to parse a set of sentences consistently, achieved through inter-sentence modeling with Markov Random Fields (MRF). When combined with existing logic-based systems, our method consistently improves results in RTE experiments on English and Japanese.

Highlights

  • While today’s neural network-based syntactic parsers (Dyer et al., 2016; Dozat and Manning, 2017; Yoshikawa et al., 2017) have proven successful at sentence-level modeling, it is still challenging to accurately process texts that go beyond a single sentence

  • For ccg2lambda, we found no improvement in Recognizing Textual Entailment (RTE) performance with our Markov Random Fields (MRF), while for LangPro, the MRF helps solve two additional problems

  • Our MRF consistently contributes to improved accuracy for both ccg2lambda and LangPro
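To make the inter-sentence MRF idea concrete, here is a minimal sketch (all names, scores, and the brute-force search are illustrative assumptions, not the paper's implementation): each sentence contributes a unary potential from the parser's own score for a candidate parse, and pairs of sentences contribute a binary potential rewarding shared syntactic structure; the jointly best assignment is then selected.

```python
import itertools

# Hypothetical sketch of the inter-sentence MRF: choose one CCG parse per
# sentence so that the parser's scores (unary potentials) plus a pairwise
# consistency bonus (binary potentials) are jointly maximized.
# All candidate labels and scores below are made-up toy values.

def joint_best(candidates, unary, pairwise, weight=1.0):
    """candidates: one list of candidate parses per sentence."""
    best, best_score = None, float("-inf")
    for assign in itertools.product(*candidates):
        score = sum(unary[p] for p in assign)
        # Reward sentence pairs that receive similar syntactic analyses.
        score += weight * sum(pairwise(a, b)
                              for a, b in itertools.combinations(assign, 2))
        if score > best_score:
            best, best_score = assign, score
    return best

# Toy example: premise T and hypothesis H, each with two candidate parses,
# labeled by the analysis they assign to a shared verb.
cands = [["T:transitive", "T:passive"], ["H:transitive", "H:passive"]]
unary = {"T:transitive": 0.6, "T:passive": 0.5,
         "H:transitive": 0.4, "H:passive": 0.55}
same_structure = lambda a, b: 1.0 if a.split(":")[1] == b.split(":")[1] else 0.0

print(joint_best(cands, unary, same_structure))
# → ('T:passive', 'H:passive')
```

Note how the pairwise potential flips the choice: the locally best parses ("T:transitive", "H:passive") are structurally inconsistent, so the jointly scored assignment prefers a consistent pair instead.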


Summary

Introduction

While today’s neural network-based syntactic parsers (Dyer et al., 2016; Dozat and Manning, 2017; Yoshikawa et al., 2017) have proven successful at sentence-level modeling, it is still challenging to accurately process texts that go beyond a single sentence (e.g., coreference resolution, discourse structure analysis). In this work we focus on the consistent analysis of multiple sentences in a document, a problem as important for reasoning tasks as other forms of document analysis. Existing methods based on formal logic (Bos, 2008; Martínez-Gómez et al., 2017; Abzianidze, 2017) obtain logical formulas for the premise T and hypothesis H using an off-the-shelf CCG parser, and feed them to a theorem prover. The standard approach to mapping CCG trees onto logical formulas is to assign λ-terms to the words in a sentence, e.g. via a semantic template such as V ⊢ S\NP : λF.(∃x.(F(x) ∧ ∃e.V(e, x))) for an intransitive verb.
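As a rough illustration of how such a semantic template composes with a noun phrase (the function names and formula strings below are hypothetical, not from the paper's code), the λ-term for the S\NP category can be mimicked with a Python closure that waits for the subject's predicate F and builds the formula:

```python
# Sketch of assigning the lambda-term template
#   V |- S\NP : λF.(∃x.(F(x) ∧ ∃e.V(e, x)))
# to an intransitive verb and composing it with a subject NP.

def verb_template(verb):
    # The S\NP term: a function from the subject's one-place predicate F
    # to a closed formula string.
    return lambda F: f"∃x.({F('x')} ∧ ∃e.{verb}(e, x))"

def noun(pred):
    # A noun denotes a one-place predicate over individuals.
    return lambda x: f"{pred}({x})"

# "A dog runs": apply the verb's term to the NP's term (beta-reduction).
formula = verb_template("run")(noun("dog"))
print(formula)  # → ∃x.(dog(x) ∧ ∃e.run(e, x))
```

Consistency across sentences matters precisely here: if T and H parse a shared verb with different categories, the templates instantiate different predicate-argument structures, and the prover cannot align the resulting formulas.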

