Abstract

Semantic inference plays an essential role in numerous Natural Language Processing (NLP) tasks, such as question answering, machine reading, and text summarization. Reasoning in natural language is inseparable from knowledge about inference, which is often represented in the form of predicate-based entailment rules. Many efforts have been dedicated to extracting entailment rules from text corpora using statistical methods such as the distributional hypothesis and Latent Dirichlet Allocation (LDA). However, these studies fail to give equal consideration to both the coverage and the accuracy of the mined rules, which introduces instability into downstream applications. To address this problem, this paper proposes a novel model named the Deep Contextual Architecture (DCA), driven by Deep Belief Networks (DBNs), for mining predicate-based inference rules from text. In addition to the statistical contextual information used in previous work, we also feed semantic information, represented by word embeddings, into the DBNs to learn topic-level representations of predicates. By combining the benefits of both kinds of information, the proposed DCA model shows potential for better modeling the context of predicates. Evaluation on public datasets demonstrates that our method outperforms several strong baselines.
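
For intuition, the sketch below shows one way the architecture the abstract describes could be realized: a Deep Belief Network built as a stack of Bernoulli RBMs, pretrained greedily on each predicate's distributional context features concatenated with its word embedding, with the top hidden layer serving as the topic-level representation. The abstract gives no implementation details, so all names, dimensions, and the CD-1 training procedure here are illustrative assumptions, not the authors' method.

```python
# Minimal sketch (NOT the paper's implementation): a DBN as a stack of RBMs
# over [context features ++ word embedding] per predicate. All dimensions,
# hyperparameters, and the CD-1 loop below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli RBM trained with one step of contrastive divergence (CD-1)."""
    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = rng.normal(0, 0.01, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        # Positive phase: hidden activations driven by the data.
        p_h0 = self.hidden_probs(v0)
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
        # Negative phase: one Gibbs reconstruction step.
        v1 = self.visible_probs(h0)
        p_h1 = self.hidden_probs(v1)
        # CD-1 gradient approximation and parameter update.
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ p_h0 - v1.T @ p_h1) / n
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (p_h0 - p_h1).mean(axis=0)

def train_dbn(data, layer_sizes, epochs=10):
    """Greedy layer-wise pretraining; returns the stacked RBMs."""
    rbms, x = [], data
    for n_hidden in layer_sizes:
        rbm = RBM(x.shape[1], n_hidden)
        for _ in range(epochs):
            rbm.cd1_step(x)
        x = rbm.hidden_probs(x)  # hidden activations feed the next layer
        rbms.append(rbm)
    return rbms

# Hypothetical inputs: normalized distributional context counts for each
# predicate, concatenated with a pretrained word embedding of the predicate.
n_predicates, n_context, emb_dim = 500, 300, 100
context_feats = rng.random((n_predicates, n_context))
embeddings = rng.random((n_predicates, emb_dim))
X = np.hstack([context_feats, embeddings])

dbn = train_dbn(X, layer_sizes=[200, 50])
topic_repr = X
for rbm in dbn:
    topic_repr = rbm.hidden_probs(topic_repr)
# topic_repr now holds a 50-dim topic-level vector per predicate; similarity
# between these vectors could then score candidate entailment rules.
```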
