Abstract

Biomedical terms extracted using Word2vec, the most popular word embedding model in recent years, serve as the foundation for various natural language processing (NLP) applications, such as biomedical information retrieval, relation extraction, and recommendation systems. The objective of this study is to examine how changing the ratio of biomedical-domain to general-domain data in the training corpus affects the extraction of similar biomedical terms using Word2vec. We downloaded the abstracts of 214,892 articles from PubMed Central (PMC) and the 3.9 GB Billion Word (BW) benchmark corpus from the computer science community. The datasets were preprocessed and combined into 11 corpora with BW-to-PMC ratios ranging from 0:10 to 10:0, and a Word2vec model was trained on each corpus. The cosine similarities between biomedical terms were then compared across the resulting models. The results indicated that models trained on both BW and PMC data outperformed the model trained on medical data alone. The similarity between biomedical terms extracted by Word2vec increased when the ratio of biomedical-domain to general-domain data was between 3:7 and 5:5. This study gives NLP researchers better-informed guidance for applying Word2vec and increasing the similarity of extracted biomedical terms, improving their effectiveness in NLP applications such as biomedical information extraction.
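The workflow described above (mixing the two corpora at a fixed ratio, training Word2vec on each mix, and comparing cosine similarities between biomedical terms) can be sketched as follows. This is a minimal illustration assuming gensim >= 4.0; the file names, the line-level subsampling scheme, the example term pair, and the hyperparameters are placeholder assumptions, not the authors' actual settings.

```python
# Minimal sketch of the ratio experiment, assuming gensim >= 4.0 and two
# plain-text corpora with one sentence per line. File names, subsampling,
# the term pair, and hyperparameters are illustrative assumptions.
from gensim.models import Word2Vec
from gensim.utils import simple_preprocess


def load_fraction(path, fraction):
    """Read a line-per-sentence corpus, keeping roughly `fraction` of its lines."""
    sentences = []
    with open(path, encoding="utf-8") as f:
        for i, line in enumerate(f):
            if (i % 10) < fraction * 10:
                sentences.append(simple_preprocess(line))
    return sentences


results = {}
for bw_parts in range(11):  # BW:PMC ratios from 0:10 to 10:0
    pmc_parts = 10 - bw_parts
    corpus = (load_fraction("billion_word.txt", bw_parts / 10)
              + load_fraction("pmc_abstracts.txt", pmc_parts / 10))
    model = Word2Vec(corpus, vector_size=200, window=5,
                     min_count=5, sg=1, workers=4)
    # Cosine similarity for one illustrative biomedical term pair
    # (assumes both terms occur in every corpus mix).
    results[f"{bw_parts}:{pmc_parts}"] = model.wv.similarity("aspirin", "ibuprofen")

for ratio, similarity in results.items():
    print(ratio, round(float(similarity), 3))
```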

Highlights

  • Owing to the rapid development of biomedical research, a large number of biomedical publications are available online in electronic format, and this number increases every year

  • Because the medical literature contains a wealth of biomedical information, using publication data to solve a variety of biomedical problems, such as relation extraction, has become a popular approach in recent years [2,3]

  • Compared to ontology-based approaches, such as the Unified Medical Language System (UMLS) and WordNet [6,7,8], word embedding technology has the following advantages: (1) it saves time and resources because it does not require human involvement; (2) it can analyze big data and produce results that humans cannot; (3) it yields up-to-date results when fed up-to-date corpora


Summary

Introduction

Owing to the rapid development of biomedical research, a large number of biomedical publications are available online in electronic format, and this number increases every year. Because the medical literature contains a wealth of biomedical information, using publication data to solve a variety of biomedical problems, such as relation extraction, has become a popular approach in recent years [2,3]. When processing large amounts of unlabeled, unstructured data, such as research articles, word embedding technology [4,5] is an ideal approach for obtaining semantic relationships between words. Compared to ontology-based approaches, such as the Unified Medical Language System (UMLS) and WordNet [6,7,8], word embedding technology has the following advantages: (1) it saves time and resources because it does not require human involvement; (2) it can analyze big data and produce results that humans cannot; (3) it yields up-to-date results when fed up-to-date corpora.
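As a brief illustration of how word embeddings surface semantic relationships from unlabeled text without manual curation, the sketch below (assuming gensim >= 4.0, with a hypothetical model file and query term) retrieves the nearest neighbours of a biomedical term by cosine similarity.

```python
# Query a trained Word2vec model for the terms closest to a biomedical term.
# The model path and query term are hypothetical examples.
from gensim.models import Word2Vec

model = Word2Vec.load("word2vec_pmc_bw.model")
for term, score in model.wv.most_similar("diabetes", topn=5):
    print(f"{term}\t{score:.3f}")
```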


