Abstract

Every year, around 28,100 journals publish 2.5 million research publications. Search engines, digital libraries, and citation indexes are used extensively to search these publications. When a user submits a query, the system returns a large number of documents, of which only a few are relevant. Due to inadequate indexing, the resulting documents are largely unstructured. Most publicly known systems index research papers using keywords rather than a subject hierarchy. Numerous methods reported for single-label classification (SLC) or multi-label classification (MLC) are based on content and metadata features. Content-based techniques yield better results owing to the richness of their features, but their drawback is that full text is unavailable in most cases. Metadata-based parameters, such as title, keywords, and general terms, act as an alternative to content. However, existing metadata-based techniques achieve low accuracy because they rely on traditional statistical measures, such as BOW, TF, and TFIDF, to express textual properties in quantitative form. These measures may not capture the semantic context of words. Existing MLC techniques also require a specified threshold value to map articles into predetermined categories, for which domain knowledge is necessary. The objective of this paper is to overcome these limitations of SLC and MLC techniques. To capture the semantic and contextual information of words, the proposed approach leverages the Word2Vec paradigm for textual representation. The proposed model determines threshold values through rigorous data analysis, obviating the need for domain expertise. Experimentation is carried out on two datasets from the field of computer science (JUCS and ACM). In comparison to current state-of-the-art methodologies, the proposed model performed well. Experiments yielded average accuracies of 0.86 and 0.84 for JUCS and ACM in SLC, and 0.81 and 0.80 for JUCS and ACM in MLC. On both datasets, the proposed SLC model improved accuracy by up to 4%, while the proposed MLC model improved accuracy by up to 3%.
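To make the general idea concrete, the sketch below illustrates one plausible way such a metadata-based pipeline could look: a paper's title and keywords are tokenized, represented by averaged Word2Vec vectors, and multiple labels are assigned by thresholding per-category similarity. This is a minimal illustration, not the authors' exact method; the gensim library, the toy corpus, the category centroids, and the threshold value are all assumptions for demonstration only (the paper derives its thresholds from data analysis).

```python
# Minimal sketch (illustrative, not the authors' pipeline): average Word2Vec
# vectors of title + keyword tokens, then assign labels by thresholding
# cosine similarity against per-category centroid vectors.
import numpy as np
from gensim.models import Word2Vec

# Toy metadata; the paper uses title/keyword metadata from JUCS and ACM.
papers = [
    ("query optimization in relational databases", ["databases", "query processing"]),
    ("deep neural networks for image recognition", ["machine learning", "vision"]),
]
sentences = [(title + " " + " ".join(kws)).split() for title, kws in papers]

# Train Word2Vec on the metadata tokens (hyperparameters are illustrative).
w2v = Word2Vec(sentences, vector_size=50, window=3, min_count=1, seed=1)

def doc_vector(tokens):
    """Average the Word2Vec vectors of known tokens to form a document vector."""
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

# Hypothetical category centroids; in practice these would be built from
# papers already labeled with each subject category.
categories = {
    "Databases": doc_vector(sentences[0]),
    "Artificial Intelligence": doc_vector(sentences[1]),
}

def classify(tokens, threshold=0.5):
    """Multi-label assignment: keep every category whose cosine similarity
    with the document vector meets the threshold (data-derived in the paper)."""
    d = doc_vector(tokens)
    labels = []
    for name, c in categories.items():
        sim = float(np.dot(d, c) / (np.linalg.norm(d) * np.linalg.norm(c) + 1e-9))
        if sim >= threshold:
            labels.append(name)
    return labels

print(classify("indexing techniques for large databases".split()))
```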

Highlights

  • Every year, around 28,100 journals publish 2.5 million research publications

  • When we used the metadata separately in single-label classification (SLC), we found that the title metadata had a higher average accuracy of 0.81 and 0.79 for the JUCS and Association of Computing Machinery (ACM) datasets, respectively

  • In the case of double metadata, the combination of title and keywords worked exceptionally well, with average accuracy of 0.86 and 0.84 for the JUCS and ACM datasets, respectively

  • When we looked at the metadata separately in multi-label classification (MLC), we found that the keywords metadata had a higher average accuracy of 0.75 and 0.73 for the JUCS and ACM datasets, respectively


Summary

Introduction

Every year, around 28,100 journals publish 2.5 million research publications. Search engines, digital libraries, and citation indexes are used extensively to search these publications. Moreover, some renowned publishers, such as ACM and IEEE, have not made the full text of their articles publicly available. In such scenarios, some scholars have turned to metadata as an alternative method of categorizing research papers [12,13,14]. The current state-of-the-art approaches use traditional statistical measures such as Term Frequency (TF), Bag of Words (BOW), and Term Frequency-Inverse Document Frequency (TFIDF) [9,10,11,12,13,14]. As a result, they overlook the semantic and contextual information of keywords, potentially leading to the incorrect categorization of research publications. The proposed approach instead uses Word2Vec, which was created by Mikolov et al. at Google in 2013 [19].
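To illustrate the limitation the paper attributes to these traditional measures, the short example below builds a TFIDF representation of two titles: each word becomes an independent column, so semantically related words (e.g., "neural" and "learning") share no signal. The scikit-learn vectorizer and the sample titles are assumptions chosen purely for illustration.

```python
# Illustrative only: a BOW/TFIDF representation treats words as independent
# dimensions, so it carries no semantic or contextual relation between them.
from sklearn.feature_extraction.text import TfidfVectorizer

titles = [
    "efficient query processing in relational databases",
    "neural networks for natural language processing",
]
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(titles)          # sparse document-term matrix
print(vectorizer.get_feature_names_out())     # vocabulary: one column per word
print(X.toarray().round(2))                   # TFIDF weights, no word semantics
```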

