Abstract

Community-based question answering (CQA) systems have become popular and have accumulated large collections of user-provided questions and answers. Accurately matching relevant answers to a given question is therefore an essential function in CQA tasks. Recent effective methods exploit word-pair interactions between questions and answers for CQA matching. However, these approaches usually encode questions and answers independently, overlooking the fact that the two can complement and enhance each other to yield better representations and thereby capture more implicit interactions. In addition, most existing approaches ignore visual information, social information, and the variable-length problem of answers. In this paper, a Social-aware Multi-modal Co-attention Convolutional Matching method (SMCACM) is proposed, which models the multi-modal content and social context of questions and answers in a unified framework for CQA matching. A novel co-attention network extracts complementary information from questions and answers so that each enhances the other's representation, allowing the model to capture more implicit question-answer interactions. Beyond textual content, the model exploits visual content through object detection techniques and social context through a meta-path based heterogeneous social representation learning approach. Finally, a pooling-based convolutional matching network infers the matching score from the complemented question and answer representations and accepts variable-length answers as inputs without padding or truncation. Experimental results on two real-world datasets demonstrate the superior performance of SMCACM compared with other state-of-the-art algorithms.
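To make the question-answer co-attention and the pooling-based convolutional matcher described above concrete, the following is a minimal illustrative sketch, not the authors' released code: it assumes PyTorch, single (unbatched) examples, and arbitrary layer sizes, and only shows how cross-attention can enhance both sequences and how global pooling lets the matcher accept answers of any length without padding.

```python
# Illustrative sketch (assumed names and dimensions, not the paper's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoAttention(nn.Module):
    """Let question and answer token representations attend to and enhance each other."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim, bias=False)

    def forward(self, q, a):
        # q: (Lq, dim), a: (La, dim) -- one question-answer pair for clarity
        scores = self.proj(q) @ a.t()                   # (Lq, La) affinity matrix
        q_enh = q + F.softmax(scores, dim=1) @ a        # answer-aware question
        a_enh = a + F.softmax(scores.t(), dim=1) @ q    # question-aware answer
        return q_enh, a_enh

class ConvMatcher(nn.Module):
    """1-D convolutions + global max-pooling -> a single matching score, for any answer length."""
    def __init__(self, dim, channels=64):
        super().__init__()
        self.conv_q = nn.Conv1d(dim, channels, kernel_size=3, padding=1)
        self.conv_a = nn.Conv1d(dim, channels, kernel_size=3, padding=1)
        self.score = nn.Linear(2 * channels, 1)

    def forward(self, q_enh, a_enh):
        # (L, dim) -> (1, dim, L) for Conv1d, then pool over the length dimension
        q_feat = F.relu(self.conv_q(q_enh.t().unsqueeze(0))).max(dim=2).values
        a_feat = F.relu(self.conv_a(a_enh.t().unsqueeze(0))).max(dim=2).values
        return self.score(torch.cat([q_feat, a_feat], dim=1)).squeeze()

dim = 128
q = torch.randn(12, dim)   # 12 question tokens
a = torch.randn(57, dim)   # 57 answer tokens -- no padding or truncation needed
q_enh, a_enh = CoAttention(dim)(q, a)
print(ConvMatcher(dim)(q_enh, a_enh))  # scalar matching score
```

The pooling step is what removes the fixed-length constraint: the convolutions slide over however many tokens the answer has, and the global max-pool collapses the result to a fixed-size feature vector before scoring. The full model in the paper additionally incorporates visual features and meta-path based social embeddings, which are omitted here.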
