Abstract

In recent years, neural-network-based word embeddings have been widely used in text mining. However, the dense representations of word embeddings act as a black box and lack interpretability. Even though word embeddings can capture semantic regularities in free-text documents, it is not clear what kinds of semantic relations they represent or how semantically related terms can be retrieved from them. In this study, we propose a novel approach to exploring the semantic relations in neural embeddings using extrinsic knowledge from WordNet and the Unified Medical Language System (UMLS). We trained multiple word embeddings on health-related Wikipedia articles. We then evaluated the different word embeddings on semantic relation term retrieval tasks. This study shows that word embeddings can group terms with diverse semantic relations together.
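To make the retrieval step concrete, below is a minimal sketch (not the authors' code) of the kind of semantic relation term retrieval the abstract describes: train a word2vec model with gensim and query nearest neighbors by cosine similarity. The toy corpus and the query term "insulin" are illustrative stand-ins for the health-related Wikipedia corpus used in the study.

    # Minimal sketch: nearest-neighbor term retrieval from a word embedding.
    # The corpus here is a hypothetical stand-in for tokenized sentences
    # drawn from health-related Wikipedia articles.
    from gensim.models import Word2Vec

    corpus = [
        ["diabetes", "is", "treated", "with", "insulin"],
        ["insulin", "lowers", "blood", "glucose"],
        ["hypertension", "is", "treated", "with", "beta", "blockers"],
    ]

    # Train a small word2vec model (parameters chosen only for the toy example).
    model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, epochs=50)

    # Retrieve the terms closest to "insulin" in embedding space by cosine
    # similarity; neighbors can reflect diverse relations (treatment,
    # co-occurrence, etc.), which is what the study probes against WordNet
    # and UMLS relation types.
    for term, score in model.wv.most_similar("insulin", topn=3):
        print(f"{term}\t{score:.3f}")

On a real corpus, the retrieved neighbors for a query term would then be checked against the relation types defined in WordNet and the UMLS to see which semantic relations the embedding space encodes.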
