Abstract

Word embedding is an important technique in natural language processing, and word2vec is a representative algorithm. However, word2vec and many other dictionary-based word embedding algorithms create word vectors only for words that appear in the training data, ignoring the morphological features of those words. The FastText algorithm was previously proposed to solve this problem: it builds a word vector from subword vectors, making it possible to produce embeddings even for words never seen during training. Because it exploits morphological features, FastText is strong in syntactic tasks but weak in semantic tasks compared with word2vec. In this paper, we propose a method for improving FastText by using the inverse document frequency of subwords, with the aim of overcoming FastText's weakness in semantic tasks. In our experiments, the proposed method shows improved results on semantic tests with only a small loss on syntactic tests. Our method can be applied to any word embedding algorithm that uses subwords. We additionally tested probabilistic FastText, an algorithm designed to distinguish words with multiple meanings, with the inverse document frequency added, and the results confirmed improved performance.
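To make the idea concrete, the sketch below illustrates one plausible reading of the approach: FastText represents a word as the sum of its character n-gram (subword) vectors, and the proposal reweights that sum by each subword's inverse document frequency so that ubiquitous, uninformative subwords contribute less. The function names, the n-gram range, and the choice to treat each vocabulary word as a "document" for the IDF computation are illustrative assumptions, not the paper's exact formulation.

```python
import math
from collections import Counter

def char_ngrams(word, n_min=3, n_max=6):
    """FastText-style character n-grams of a word wrapped in boundary markers."""
    w = f"<{word}>"
    return [w[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

def subword_idf(vocab):
    """IDF of each subword; here each distinct word acts as a 'document'
    (an assumption -- the paper may define documents differently)."""
    vocab = set(vocab)
    df = Counter(g for word in vocab for g in set(char_ngrams(word)))
    n_docs = len(vocab)
    return {g: math.log(n_docs / df_g) for g, df_g in df.items()}

def word_vector(word, subword_vecs, idf, dim):
    """IDF-weighted combination of subword vectors, replacing FastText's
    uniform sum; unseen words still get a vector from their subwords."""
    vec = [0.0] * dim
    total = 0.0
    for g in char_ngrams(word):
        if g in subword_vecs:
            w = idf.get(g, 0.0)
            total += w
            for i, x in enumerate(subword_vecs[g]):
                vec[i] += w * x
    return [x / total for x in vec] if total else vec
```

Under this reading, plain FastText corresponds to giving every subword the same weight; the IDF weighting down-weights subwords that occur in nearly every word, which plausibly explains the gain on semantic tests reported above.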
