Abstract
High-quality word embeddings are of great significance for advancing applications of biomedical natural language processing. In recent years, interest in how to learn good embeddings and evaluate their quality on English medical text has grown rapidly; however, only a limited number of studies have been performed on Chinese medical text, particularly Chinese clinical records. Herein, we propose a novel approach that improves the quality of learned embeddings by using out-domain data as a supplement when Chinese clinical records are limited. Moreover, we evaluate embedding quality with a method based on the Medical Conceptual Similarity Property. The experimental results reveal that selecting good training samples is necessary, and that collecting the right amount of out-domain data and trading off embedding quality against training time are essential for obtaining better embeddings.
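The following is a minimal sketch, not the authors' implementation, of the general idea of supplementing a small in-domain corpus with out-domain text when training skip-gram embeddings; the file names, tokenization, and hyperparameters are hypothetical and for illustration only.

# Minimal sketch, NOT the authors' implementation: supplement a small in-domain
# clinical corpus with out-domain text and train skip-gram embeddings.
# File names, tokenization, and hyperparameters are hypothetical placeholders.
from gensim.models import Word2Vec


def load_tokenized(path):
    """Read one pre-tokenized (space-separated) sentence per line."""
    with open(path, encoding="utf-8") as f:
        return [line.split() for line in f if line.strip()]


# Hypothetical corpora: limited clinical records plus a larger out-domain corpus.
in_domain = load_tokenized("clinical_records.txt")
out_domain = load_tokenized("out_domain_corpus.txt")

# Train one model on the combined corpus (gensim 4.x API).
model = Word2Vec(
    sentences=in_domain + out_domain,
    vector_size=200,  # embedding dimensionality
    window=5,
    min_count=5,
    sg=1,             # skip-gram
    workers=4,
)
model.save("clinical_embeddings.model")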
Highlights
Word embeddings, or embeddings for short, have been widely used in various natural language processing tasks, such as language modeling (Bengio et al., 2003; Sundermeyer et al., 2012; Adams et al., 2017), syntactic parsing (Grefenstette et al., 2014; Tu et al., 2017), and part-of-speech tagging (Yang and Eisenstein, 2016).
Learning embeddings from English medical texts, a hot topic in recent years, has been extensively studied thanks to openly available datasets, such as the UMLS of the NLM (Bodenreider, 2004), medical journal abstracts from PubMed (Choi et al., 2016a), and some released clinical data (Finlayson et al., 2014; Stubbs and Uzuner, 2015).
Referring to the evaluation method for medical concept embeddings proposed by Choi et al. (2016b), which is based on the medical conceptual similarity property, we propose a method for distantly evaluating the embeddings learned from Chinese clinical records using an additional standard medical terminology dataset; a rough sketch of such a similarity-based check follows below.
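As an illustration only, and not the paper's actual evaluation protocol, one could score embeddings by averaging cosine similarities over term pairs that a standard medical terminology resource marks as related; the pair list and terms below are hypothetical examples.

# Minimal sketch, NOT the paper's evaluation protocol: score embeddings by the
# average cosine similarity over term pairs taken from a standard medical
# terminology resource. The pairs shown are hypothetical examples.
from gensim.models import Word2Vec

model = Word2Vec.load("clinical_embeddings.model")


def mean_pair_similarity(pairs, wv):
    """Average cosine similarity over pairs whose terms are in the vocabulary."""
    sims = [wv.similarity(a, b) for a, b in pairs if a in wv and b in wv]
    return sum(sims) / len(sims) if sims else 0.0


# Hypothetical pairs of conceptually related terms (e.g., disease and treatment).
related_pairs = [("糖尿病", "胰岛素"), ("高血压", "降压药")]
print("mean similarity over related pairs:", mean_pair_similarity(related_pairs, model.wv))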
Summary
Word embeddings, or embeddings for short, have been widely used in various natural language processing tasks, such as language modeling (Bengio et al., 2003; Sundermeyer et al., 2012; Adams et al., 2017), syntactic parsing (Grefenstette et al., 2014; Tu et al., 2017), and part-of-speech tagging (Yang and Eisenstein, 2016). Learning embeddings from English medical texts, a hot topic in recent years, has been extensively studied thanks to openly available datasets, such as the UMLS of the NLM (Bodenreider, 2004), medical journal abstracts from PubMed (Choi et al., 2016a), and some released clinical data (Finlayson et al., 2014; Stubbs and Uzuner, 2015). These datasets have been widely used as gold standards in the biomedical natural language processing community for learning embeddings (De Vine et al., 2014; Choi et al., 2016b). The embeddings learned from Chinese clinical records alone, however, are not good enough.