Abstract

We propose a way to build summarized logical forms (SLFs) automatically, relying on semantic and discourse parsers as well as syntactic and semantic generalization. Summarized logical forms represent the main topic and the essential part of answer content, and are designed to be matched with a logical representation of a question. We explore the possibility of building SLFs from the Abstract Meaning Representation (AMR) of the sentences deemed most important, by selecting AMR subgraphs. In parallel, we leverage discourse analysis of answer paragraphs to highlight the more important elementary discourse units (EDUs) to convert into SLFs (less important EDUs are not converted into SLFs). The third source of SLFs is a pair-wise generalization of answers with each other. The proposed methodology is designed to improve question answering (Q/A) precision by avoiding matches between questions and less important, uninformative parts of a given answer (thus avoiding the delivery of foreign answers). A stand-alone evaluation of each of the three sources of SLFs is conducted, as well as an assessment of a hybrid SLF generation system. We conclude that indexing the most important text fragments of an answer as SLFs, instead of the whole answer, improves Q/A precision by almost 10% as measured by F1 and 8% as measured by NDCG.
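The discourse-based source of SLFs described above rests on a standard distinction in rhetorical structure: nucleus EDUs carry the essential content of a passage, while satellite EDUs elaborate on it. A minimal sketch of nucleus-only EDU selection, using a toy tree structure rather than the authors' actual parser (all names here are illustrative assumptions):

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical, minimal RST-style discourse tree: a node is either an EDU
# (leaf with text) or an internal relation over children marked as nucleus
# (important) or satellite (less important). This is an illustrative sketch,
# not the paper's implementation.

@dataclass
class Node:
    text: Optional[str] = None          # EDU text for leaves
    nucleus: bool = True                # nucleus vs. satellite role
    children: List["Node"] = field(default_factory=list)

def important_edus(node: Node) -> List[str]:
    """Collect EDU texts reachable through nucleus-only paths;
    satellite subtrees are skipped and thus never indexed as SLFs."""
    if not node.children:
        return [node.text] if node.text else []
    edus: List[str] = []
    for child in node.children:
        if child.nucleus:
            edus.extend(important_edus(child))
    return edus

# Toy answer paragraph: the elaborating satellite EDU is excluded.
tree = Node(children=[
    Node(text="The battery lasts ten hours.", nucleus=True),
    Node(text="By the way, it ships in blue.", nucleus=False),
])
print(important_edus(tree))  # ['The battery lasts ten hours.']
```

Only the nucleus EDUs returned here would be converted into SLFs and indexed, so a question about shipping color would not match this answer, which is the precision gain the abstract describes.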
