Abstract

Word embedding refers to mapping words or phrases to vectors of real numbers. It is a precondition for text classification, sentiment analysis, and text mining with deep neural networks in natural language processing. Taking English as an example, most current word embedding algorithms obtain vectors by learning the distribution of a word's prefixes, suffixes, stems (etyma), and the entire word itself. Unlike English words, Chinese words are composed of components and strokes, and those components and strokes often hint at the meaning of the word. Thus, the distribution of components and strokes should be fully considered and learned when performing Chinese word embedding. In this paper, we propose a component-based cascade n-gram (CBC n-gram) model and a stroke-based cascade n-gram (SBC n-gram) model. By overlaying component and stroke n-gram vectors on word vectors, we improve Chinese word embedding so as to preserve as much morphological information as possible at different granularity levels. We evaluate our models on word similarity, word analogy, and text classification tasks using wordsim-240, wordsim-296, the Chinese word analogy dataset, and the Fudan Corpus, respectively. Experimental and comparison results show that our models outperform other state-of-the-art methods.
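To make the "overlaying" idea concrete, below is a minimal sketch of how such a composition could work, in the spirit of fastText-style subword models: a word's final vector is the sum of its own vector and the vectors of its component n-grams and stroke n-grams. The lookup tables, the `embed` function, and the decomposition of the example word 木材 ("timber") into components and strokes are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch (not the paper's official code): compose a Chinese
# word vector by summing the word's own vector with the vectors of its
# component n-grams and stroke n-grams. Embeddings are random stand-ins
# for what would be trainable parameters in the actual model.
import numpy as np

DIM = 100
rng = np.random.default_rng(0)

def ngrams(seq, n_min=1, n_max=3):
    """All contiguous n-grams of seq for n in [n_min, n_max]."""
    return [tuple(seq[i:i + n])
            for n in range(n_min, n_max + 1)
            for i in range(len(seq) - n + 1)]

# Hypothetical lookup tables: word -> vector, n-gram -> vector.
word_vec = {"木材": rng.normal(size=DIM)}
gram_vec = {}

def vec_of(gram):
    # Lazily create one vector per n-gram (stands in for learned embeddings).
    if gram not in gram_vec:
        gram_vec[gram] = rng.normal(size=DIM)
    return gram_vec[gram]

def embed(word, components, strokes):
    """Word vector overlaid with component and stroke n-gram vectors."""
    grams = ngrams(components) + ngrams(strokes)
    return word_vec[word] + sum(vec_of(g) for g in grams)

# 木材 decomposed into components (材 = 木 + 才) and a simplified
# stroke sequence; both decompositions are illustrative assumptions.
v = embed("木材",
          components=["木", "木", "才"],
          strokes=["横", "竖", "撇", "捺",          # 木
                   "横", "竖", "撇", "捺",          # 材's 木 part
                   "横", "竖钩", "撇"])             # 材's 才 part
print(v.shape)  # (100,)
```

In a trained model, the n-gram vectors would be learned jointly with the word vectors, so that words sharing components or stroke patterns end up close in the embedding space.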
