Abstract

Multi-premise natural language inference provides important technical support for application fields such as automatic question answering and machine reading comprehension. Existing approaches to the Multiple Premises Entailment (MPE) task convert MPE data into the Single Premise Entailment (SPE) format and then handle MPE in the same way as SPE. This conversion ignores the unique characteristics of multiple premises and therefore loses semantics. This paper proposes the Attention and Gate Fusion Network (AGNet), which adopts a “Local Matching-Integration” strategy designed around these characteristics. In the local matching stage, an attention mechanism combined with a matching gate fully describes the relationship between each premise and the hypothesis; in the integration stage, a self-attention mechanism and a fusion gate deeply exploit the relationships among the premises. To avoid over-fitting, we also propose a pre-training method for our model. In terms of computational complexity, AGNet has good parallelism and reduces the time complexity of the matching process to O(1). Experiments show that our model achieves new state-of-the-art results on the MPE test set.
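
The two stages named in the abstract can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the paper's implementation: the class names LocalMatching and GateFusion, the dot-product attention, the sigmoid gates, and the pooling are hypothetical stand-ins for AGNet's matching gate and fusion gate.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LocalMatching(nn.Module):
    """Hypothetical local matching: cross-attend one premise to the
    hypothesis, then blend the two views with a sigmoid "matching gate"."""

    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, premise, hypothesis):
        # premise: (B, Lp, D); hypothesis: (B, Lh, D)
        scores = torch.bmm(premise, hypothesis.transpose(1, 2))  # (B, Lp, Lh)
        attn = F.softmax(scores, dim=-1)
        aligned = torch.bmm(attn, hypothesis)  # hypothesis as seen by the premise
        g = torch.sigmoid(self.gate(torch.cat([premise, aligned], dim=-1)))
        return g * premise + (1 - g) * aligned  # gated per-premise match


class GateFusion(nn.Module):
    """Hypothetical integration: self-attention across the per-premise
    matches, then a sigmoid "fusion gate" to merge them."""

    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, matches):
        # matches: (B, n_premises, D) -- one pooled vector per premise
        scores = torch.bmm(matches, matches.transpose(1, 2))
        attn = F.softmax(scores, dim=-1)
        context = torch.bmm(attn, matches)
        g = torch.sigmoid(self.gate(torch.cat([matches, context], dim=-1)))
        return (g * matches + (1 - g) * context).mean(dim=1)  # (B, D)


if __name__ == "__main__":
    B, Lp, Lh, D, n = 2, 7, 5, 64, 4
    match, fuse = LocalMatching(D), GateFusion(D)
    premises = [torch.randn(B, Lp, D) for _ in range(n)]
    hypothesis = torch.randn(B, Lh, D)
    # Each premise is matched against the hypothesis independently,
    # so these steps have no sequential dependency on one another.
    pooled = torch.stack(
        [match(p, hypothesis).max(dim=1).values for p in premises], dim=1
    )
    fused = fuse(pooled)  # (B, D), fed to a classifier in a full model
    print(fused.shape)
```

The independence of the per-premise matching steps is what makes the O(1) claim plausible: with one matcher per premise running in parallel, the matching stage needs only constant sequential time regardless of the number of premises.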
