Abstract

Projecting the input text pair into a common semantic space where the matching function can be readily learned is an essential step in asymmetrical text matching. In practice, the feature vectors of asymmetrical texts are often observed to become gradually indistinguishable in the semantic space as the model is trained. However, this phenomenon is overlooked in existing studies. As a result, the feature vectors are constructed without any regularization, which inevitably hinders the learning of the downstream matching functions. In this paper, we first exploit this phenomenon and propose DDR-Match, a novel matching framework tailored for asymmetrical text matching. Specifically, DDR-Match introduces a distribution distance-based regularizer that accelerates the fusion of sequence representations from different domains in the semantic space. We then provide three instances of DDR-Match and compare them. DDR-Match is compatible with existing text matching methods, which it incorporates as the underlying matching model; four popular text matching methods are adopted in this paper. Extensive experiments on five publicly available benchmarks show that DDR-Match consistently outperforms its underlying methods.
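To make the idea concrete, below is a minimal sketch of how a distribution distance-based regularizer could be attached to a matching loss, assuming PyTorch and an RBF-kernel maximum mean discrepancy (MMD) as the distance between the two domains' feature distributions. The abstract does not specify which distance measures the three DDR-Match instances use, so every name and hyperparameter here (mmd_rbf, ddr_loss, bandwidth, lambda_reg) is an illustrative assumption, not the authors' implementation.

import torch
import torch.nn.functional as F

def mmd_rbf(x: torch.Tensor, y: torch.Tensor, bandwidth: float = 1.0) -> torch.Tensor:
    """Biased MMD estimate between two batches of feature vectors.

    x, y: (batch, dim) representations of the two text domains
    (e.g. queries vs. documents) in the shared semantic space.
    """
    def rbf(a, b):
        # Pairwise squared Euclidean distances passed through a Gaussian kernel.
        d = torch.cdist(a, b).pow(2)
        return torch.exp(-d / (2.0 * bandwidth ** 2))

    return rbf(x, x).mean() + rbf(y, y).mean() - 2.0 * rbf(x, y).mean()

def ddr_loss(query_vecs, doc_vecs, match_logits, labels, lambda_reg=0.1):
    """Matching loss plus a term that pulls the two domains' feature
    distributions together, encouraging their fusion in the shared space."""
    match_loss = F.cross_entropy(match_logits, labels)
    reg = mmd_rbf(query_vecs, doc_vecs)
    return match_loss + lambda_reg * reg

In this sketch the regularizer is simply added to the loss of whichever underlying matching model produces match_logits, which mirrors how a framework of this kind can wrap existing text matching methods without changing their architecture.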
