Most modern natural language processing (NLP) techniques are based on vector space models of language, in which each word is represented by a vector in a high-dimensional space. One of the earliest successes was demonstrated by the four-term analogical reasoning task: what is to C as B is to A? The trained word vectors form "parallelograms" representing the quadruples of words in analogy. This discovery in NLP offers insight into the human semantic representation of words via analogical reasoning. Despite the successful applications of large-scale language models, it is not fully understood why such parallelograms emerge from learning on natural language data. As the vector space model is not optimized to form parallelograms, the key structure underlying the geometric shapes of word vectors is expected to lie in the data rather than in the models. In the present article, we test the hypothesis that such a parallelogram arrangement of word vectors already exists in the co-occurrence statistics of language. Our approach focuses on the data itself, and thus differs from existing theoretical approaches that seek the mechanism of parallelogram formation in the algorithms and/or the vector arithmetic operations on word vectors. First, our analysis suggests that analogical reasoning is possible through decomposition of the bigram co-occurrence matrix. Second, we demonstrate the formation of a parallelepiped, a more structured geometric object than a parallelogram, by creating a small artificial corpus and its word vectors. Based on these results, we propose a refined form of the distributional hypothesis, pointing out an isomorphism between a kind of symmetry, or exchangeability, in language and word co-occurrence statistics.
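To make the abstract's two technical ingredients concrete, the following is a minimal sketch, not the authors' actual pipeline: it counts bigram co-occurrences in a tiny artificial corpus with exchangeable word roles, factorizes the resulting matrix with a truncated SVD to obtain word vectors, and checks how closely an analogy quadruple forms a parallelogram via its offset vectors. The corpus, word choices, and embedding dimension are illustrative assumptions only.

```python
# Hypothetical illustration of "analogy from a decomposed bigram co-occurrence matrix".
import numpy as np

# Artificial toy corpus: (man, woman) and (king, queen) appear in largely
# exchangeable contexts, mimicking the symmetry discussed in the abstract.
corpus = [
    "the man walks", "the woman walks", "the king rules", "the queen rules",
    "the man speaks", "the woman speaks", "the king speaks", "the queen speaks",
    "a royal king", "a royal queen", "a common man", "a common woman",
]

tokens = sorted({w for sent in corpus for w in sent.split()})
index = {w: i for i, w in enumerate(tokens)}

# Bigram co-occurrence counts: C[i, j] = how often word j directly follows word i.
C = np.zeros((len(tokens), len(tokens)))
for sent in corpus:
    words = sent.split()
    for a, b in zip(words, words[1:]):
        C[index[a], index[b]] += 1

# Low-rank decomposition of the co-occurrence matrix; rows of U * S serve as word vectors.
U, S, Vt = np.linalg.svd(C, full_matrices=False)
dim = 4  # assumed embedding dimension for this toy example
vecs = U[:, :dim] * S[:dim]

def v(word):
    return vecs[index[word]]

# Parallelogram test: if the analogy holds, the two offsets should nearly coincide,
# i.e. v(king) - v(man) is approximately v(queen) - v(woman).
offset_a = v("king") - v("man")
offset_b = v("queen") - v("woman")
cos = offset_a @ offset_b / (np.linalg.norm(offset_a) * np.linalg.norm(offset_b) + 1e-12)
print(f"cosine similarity of the two analogy offsets: {cos:.3f}")
```

A cosine similarity of the two offsets close to 1 would indicate an approximately parallelogram-shaped quadruple; the point of the sketch is only to show where such structure could come from in co-occurrence counts, not to reproduce the paper's experiments.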