Abstract

The authors of Word2Vec claimed that their technology can solve the word analogy problem using a vector transformation in the introduced vector space. However, practice demonstrates that this is not always true. In this paper, we investigate several Word2Vec and FastText models trained for the Russian language and identify the reasons for this inconsistency. We found that different types of words demonstrate different behavior in the semantic space. FastText vectors tend to find phonological analogies, while Word2Vec vectors are better at finding relations among geographical proper names. However, we found that only four out of the fifteen selected domains demonstrate an accuracy above 0.8. We also conclude that, in the general case, the word analogy task cannot be solved using a random word pair taken from the two investigated categories. Our experiments demonstrate that in some cases the lengths of the vectors can differ by more than a factor of two. Computing an average vector leads to a better solution here, since it lies closer to more of the individual vectors.
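
The two operations referred to above can be sketched as follows with gensim. This is a minimal illustration, not the paper's actual setup: the model file name and the Russian word pairs are assumptions chosen for the example. The first query solves a single-pair analogy as a nearest neighbour of a vector offset; the second replaces the single offset with an average over several known pairs from the same category.

```python
# Minimal sketch of the analogy-by-offset and averaged-offset approaches,
# using gensim. The model file and word pairs are illustrative assumptions.
import numpy as np
from gensim.models import KeyedVectors

# Hypothetical pre-trained Russian Word2Vec model in word2vec binary format.
kv = KeyedVectors.load_word2vec_format("ru_word2vec.bin", binary=True)

# Single-pair analogy: "Россия is to Москва as Франция is to ?",
# solved as the nearest neighbour of (Москва - Россия + Франция).
print(kv.most_similar(positive=["Москва", "Франция"], negative=["Россия"], topn=1))

# Averaged offset: one random pair may be atypical, so average the
# difference vectors over several known pairs from the same category.
pairs = [("Россия", "Москва"), ("Германия", "Берлин"), ("Италия", "Рим")]
mean_offset = np.mean([kv[b] - kv[a] for a, b in pairs], axis=0)
print(kv.similar_by_vector(kv["Франция"] + mean_offset, topn=1))
```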

Highlights

  • The starting point for the semantic space of natural language words was the paper [1], published in 2003

  • The FastText model was created to work with character n-grams and to learn grammatical features of the language (a toy illustration follows this list)

  • We found several reasons why the vector transformation does not work for some categories of word analogies
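
As a hedged illustration of the n-gram mechanism mentioned in the second highlight, the toy script below trains a tiny gensim FastText model and queries a deliberately misspelled word form; the corpus and hyperparameters are arbitrary assumptions, not the models studied in the paper.

```python
# Toy demonstration of FastText character n-grams: the corpus and
# hyperparameters are arbitrary assumptions, not the paper's setup.
from gensim.models import FastText

sentences = [["кот", "сидит", "на", "окне"],
             ["собака", "бежит", "по", "двору"]]
model = FastText(sentences, vector_size=50, window=3, min_count=1,
                 min_n=3, max_n=5)

# "котт" never occurs in the corpus, but shared character n-grams with
# "кот" still let the model assemble a vector and compute a similarity.
print(model.wv.similarity("кот", "котт"))
```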

Introduction

The starting point for the semantic space of natural language words was the paper [1], published in 2003. It introduced fixed-size vectors (embeddings) generated by a neural network using statistical information about the word context. This concept was developed in [2], where the author demonstrated that such pre-trained vectors can be useful for solving various natural language processing problems. The most famous example of vector arithmetic in such spaces is the analogy king − man + woman ≈ queen. Early experiments demonstrated that another favorite example, countries and their capitals, does not work correctly in every case, i.e., for every combination of country, capital, and pre-trained language model. The accuracy of this analogy was fairly high, but not high enough to state that vector arithmetic works properly. A description of contextualized word embeddings can be found in [9].
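
To make the accuracy claim concrete, here is a minimal sketch of the standard per-category analogy evaluation (each known pair supplies the offset, every other pair serves as the query). It assumes a gensim-compatible pre-trained Russian model; the file name and the word list are illustrative, not the paper's data sets.

```python
# Minimal sketch of per-category analogy accuracy in the spirit of the
# standard evaluation protocol; the model file and pairs are assumptions.
from itertools import permutations
from gensim.models import KeyedVectors

kv = KeyedVectors.load_word2vec_format("ru_word2vec.bin", binary=True)

pairs = [("Россия", "Москва"), ("Германия", "Берлин"),
         ("Италия", "Рим"), ("Франция", "Париж")]

hits = total = 0
# Every ordered pair of pairs: (a, b) supplies the offset, (c, d) the query.
for (a, b), (c, d) in permutations(pairs, 2):
    predicted = kv.most_similar(positive=[b, c], negative=[a], topn=1)[0][0]
    hits += predicted == d
    total += 1
print(f"Category accuracy: {hits / total:.2f}")
```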

Formal Statement of the Problem of Word Analogies
Review of Affine Transformation Methods for the Problem of Word Analogies
Used Data Sets
Evaluation
Data Analysis
Discussion and Conclusion