Abstract

The growing popularity of community question answering websites is reflected in their steadily increasing number of users. Many methods have been proposed to identify talented users in these communities, but most suffer from the vocabulary mismatch between the words of a question and the skills that describe expertise. Translation approaches offer a solution to this problem. This paper proposes two translation methods, both based on the attention mechanism, for extracting more relevant translations. The methods train multi-label classifiers that take a question as input and predict the skills related to it. The attention mechanism lets the model focus on the parts of the input that are most indicative of the correct labels, and the resulting word attention scores quantify how relevant each word is to a particular skill. From these scores we obtain more relevant translations for each skill, which we then use to bridge the lexical gap and improve expert retrieval results. Extensive experiments on two large sub-collections of the StackOverflow dataset demonstrate that the proposed methods outperform the best baseline by up to 14.11% improvement in MAP.

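To make the idea concrete, the following is a minimal sketch of a label-wise attention classifier of the kind described above: each skill attends over the words of a question, the model is trained as a multi-label classifier, and the attention weights can afterwards be read off as word-to-skill relevance scores (candidate translations). The architecture, layer sizes, and names here are illustrative assumptions, not the paper's exact model.

```python
# Illustrative sketch only: a label-wise attention multi-label classifier.
# Dimensions, names, and the training setup are assumptions for demonstration.
import torch
import torch.nn as nn

class SkillAttentionClassifier(nn.Module):
    def __init__(self, vocab_size, num_skills, emb_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # One query vector per skill, used to score every word in the question.
        self.skill_queries = nn.Parameter(torch.randn(num_skills, emb_dim))
        self.out = nn.Linear(emb_dim, 1)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len)
        words = self.embed(token_ids)                          # (B, T, D)
        # Attention logits of every skill over every word.
        logits = torch.einsum("btd,sd->bst", words, self.skill_queries)
        attn = torch.softmax(logits, dim=-1)                   # (B, S, T)
        # Skill-specific representations of the question.
        skill_repr = torch.einsum("bst,btd->bsd", attn, words)
        skill_scores = self.out(skill_repr).squeeze(-1)        # (B, S) multi-label logits
        return skill_scores, attn

# Usage: after training with a multi-label loss such as BCEWithLogitsLoss,
# attn[b, s, t] estimates how relevant word t of question b is to skill s;
# aggregating these scores over a corpus yields skill-to-word translations.
model = SkillAttentionClassifier(vocab_size=10_000, num_skills=50)
scores, attn = model(torch.randint(0, 10_000, (2, 20)))
```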