Abstract

In recent years, text matching has gained increasing research attention and shown great improvements. However, due to long-distance dependencies and polysemy, existing text matching models cannot effectively capture the contextual and implicit semantic information of texts. Additionally, existing models lack generalization ability when applied to different scenarios. In this study, we propose a novel Deep Interactive Text Matching (DITM) model that integrates an encoder layer, a co-attention layer, and a fusion layer into an interaction module, based on a matching-aggregation framework. In particular, the interaction process is iterated multiple times to obtain in-depth interaction information, and the relationship between the text pair is extracted through multi-perspective pooling. We conduct extensive experiments on four text matching tasks, i.e., opinion retrieval, answer selection, paraphrase identification, and natural language inference. Compared with state-of-the-art text matching methods, the proposed model achieves the best results on most of the tasks, which shows that it can effectively capture the interactive information between text pairs and generalizes well across tasks. Further multilingual investigations show similar performance on English and Chinese, suggesting that our model could be ported to other languages. This research contributes a simple and efficient implementation of text matching for settings with limited computing capacity, and sheds light on leveraging text matching models to facilitate a range of downstream tasks.
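To make the described pipeline (encoder, co-attention, fusion, iterated interaction, multi-perspective pooling) concrete, the following is a minimal PyTorch sketch. It assumes a BiLSTM encoder, a dot-product co-attention, a concatenation-based fusion, and max/mean pooling as the "multiple perspectives"; the class and parameter names (InteractionBlock, DITMSketch, dim, num_iterations) are hypothetical choices for illustration, not the authors' exact architecture.

```python
# Hypothetical sketch of an interactive text matching model in the spirit of
# the abstract; layer choices and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class InteractionBlock(nn.Module):
    """One interaction step: encode both texts, co-attend, then fuse."""

    def __init__(self, dim):
        super().__init__()
        # Assumed encoder: a BiLSTM whose bidirectional output matches `dim`.
        self.encoder = nn.LSTM(dim, dim // 2, batch_first=True, bidirectional=True)
        # Assumed fusion: combine original, attended, difference, and product views.
        self.fuse = nn.Linear(4 * dim, dim)

    def forward(self, a, b):
        a_enc, _ = self.encoder(a)                            # (batch, len_a, dim)
        b_enc, _ = self.encoder(b)                            # (batch, len_b, dim)
        # Co-attention: soft-align each token of one text with the other text.
        scores = torch.bmm(a_enc, b_enc.transpose(1, 2))      # (batch, len_a, len_b)
        a_att = torch.bmm(F.softmax(scores, dim=-1), b_enc)
        b_att = torch.bmm(F.softmax(scores.transpose(1, 2), dim=-1), a_enc)
        a_out = torch.relu(self.fuse(torch.cat([a_enc, a_att, a_enc - a_att, a_enc * a_att], dim=-1)))
        b_out = torch.relu(self.fuse(torch.cat([b_enc, b_att, b_enc - b_att, b_enc * b_att], dim=-1)))
        return a_out, b_out


class DITMSketch(nn.Module):
    """Iterates the interaction block, then pools from multiple perspectives."""

    def __init__(self, dim=300, num_iterations=3, num_classes=2):
        super().__init__()
        self.blocks = nn.ModuleList(InteractionBlock(dim) for _ in range(num_iterations))
        # Multi-perspective pooling here = max pooling + mean pooling over both texts.
        self.classifier = nn.Linear(4 * dim, num_classes)

    def forward(self, a, b):
        for block in self.blocks:                             # iterate the interaction process
            a, b = block(a, b)
        pooled = torch.cat([a.max(dim=1).values, a.mean(dim=1),
                            b.max(dim=1).values, b.mean(dim=1)], dim=-1)
        return self.classifier(pooled)                        # relation between the text pair


# Example with random tensors standing in for embedded token sequences.
model = DITMSketch(dim=300)
logits = model(torch.randn(8, 20, 300), torch.randn(8, 25, 300))
print(logits.shape)  # torch.Size([8, 2])
```

The model is intentionally small: stacking a shared interaction block and reading the pair relation off pooled representations keeps the parameter count low, which matches the abstract's emphasis on efficiency under limited computing capacity.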
