Abstract

Many digital scholarly platforms allow researchers to conduct scientific discussions through their question-and-answer systems. Such discussions are helpful to emerging as well as established researchers in numerous respects. To rely on such discussions, however, it is essential to validate the questions and answers effectively, based on both content and context. This helps in finding ‘quality’ content, improving the user experience for researchers, and obtaining reliable results in real-world applications such as influence analysis, topic modeling, expert finding, and recommendation systems. This research focuses on ResearchGate (RG), one of the well-known scholarly platforms. Questions and answers from 14,000 researchers are collected from RG, and a qualitative analysis is performed to identify various anomalies in the collected data. To resolve these anomalies, we propose a Word2Vec-based model that identifies redundancy and relevancy in questions and answers using both content and context analysis. The proposed model is evaluated on the collected data, and the results suggest that it accurately identifies redundant questions and irrelevant answers on scholarly platforms.
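The core idea outlined above, representing questions with Word2Vec embeddings and comparing them to flag redundancy, can be illustrated with a minimal sketch. This is not the paper's actual model: gensim, average-pooled word vectors, the toy questions, and the similarity threshold are all illustrative assumptions, since the abstract does not specify preprocessing or parameters.

# Minimal sketch of a Word2Vec-based redundancy check for forum questions.
# Assumptions (not from the paper): gensim's Word2Vec, average-pooled word
# vectors, and a cosine-similarity cut-off of 0.8 are illustrative choices.

import numpy as np
from gensim.models import Word2Vec

# Toy corpus of tokenized questions standing in for the collected RG data.
questions = [
    "how do i measure inter rater reliability".split(),
    "what is the best way to compute inter rater agreement".split(),
    "which software is recommended for qualitative coding".split(),
]

# Train a small Word2Vec model on the question corpus (content analysis).
model = Word2Vec(sentences=questions, vector_size=50, window=3,
                 min_count=1, epochs=50)

def embed(tokens, model):
    """Average the word vectors of tokens known to the model."""
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0

# Flag question pairs whose embeddings are highly similar as redundant.
THRESHOLD = 0.8  # illustrative cut-off, not the paper's value
for i in range(len(questions)):
    for j in range(i + 1, len(questions)):
        sim = cosine(embed(questions[i], model), embed(questions[j], model))
        if sim >= THRESHOLD:
            print(f"Questions {i} and {j} look redundant (cosine={sim:.2f})")

The same pairwise-similarity pattern could be applied between a question and its answers to flag irrelevant answers, with context signals (for example, the surrounding thread) folded into the embeddings; the paper's specific scoring is not described in the abstract.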
