Abstract

Most word embedding methods are general-purpose: they take the word as the basic unit and learn embeddings from each word's external contexts. In biomedical text mining, however, the text contains many biomedical entities and syntactic chunks that carry rich domain information, and the semantic meaning of a word is strongly related to this information. We therefore present a biomedical domain-specific word embedding model that incorporates stems, chunks, and entities when training word embeddings. We also present two deep learning architectures, one for each of two biomedical text mining tasks, with which we evaluate our word embeddings and compare them against other models. Experimental results show that our biomedical domain-specific word embeddings overall outperform general-purpose word embeddings when used in these deep learning methods for biomedical text mining tasks.
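The abstract does not spell out the training procedure, but one plausible way to realize the idea is to inject stem, chunk, and entity tags as extra context tokens in a standard skip-gram model. The sketch below is a hypothetical illustration only, not the authors' actual model; the tag format, the `tag_sentence` helper, the toy annotations, and the use of gensim's `Word2Vec` are all assumptions.

```python
# Hypothetical sketch: augmenting skip-gram contexts with stem, chunk, and
# entity tags (an assumed setup, not the paper's actual training procedure).
from gensim.models import Word2Vec

def tag_sentence(tokens, stems, chunks, entities):
    """Interleave each word with its stem, chunk label, and entity tag so the
    extra features fall inside the word's context window during training."""
    augmented = []
    for tok, stem, chunk, ent in zip(tokens, stems, chunks, entities):
        augmented.append(tok)
        augmented.append("STEM=" + stem)    # e.g. STEM=inhibit
        augmented.append("CHUNK=" + chunk)  # e.g. CHUNK=NP
        if ent != "O":                      # tag only real entity mentions
            augmented.append("ENT=" + ent)  # e.g. ENT=Protein
    return augmented

# Toy biomedical sentence; annotations are illustrative, not from a real tagger.
corpus = [
    tag_sentence(
        tokens=["p53", "inhibits", "tumor", "growth"],
        stems=["p53", "inhibit", "tumor", "growth"],
        chunks=["NP", "VP", "NP", "NP"],
        entities=["Protein", "O", "O", "O"],
    )
]

# Standard skip-gram training over the augmented sequences, so word vectors are
# conditioned on the injected domain features as well as neighboring words.
model = Word2Vec(sentences=corpus, vector_size=100, window=5, sg=1, min_count=1)
print(model.wv["p53"].shape)  # (100,)
```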
