Abstract

We propose a stochastic model for the number of different words in a given database which incorporates the dependence on the database size and historical changes. The main feature of our model is the existence of two different classes of words: (i) a finite number of core-words, which have higher frequency and do not affect the probability of a new word being used; and (ii) the remaining, virtually infinite, number of noncore-words, which have lower frequency and, once used, reduce the probability of a new word being used in the future. Our model relies on a careful analysis of the Google Ngram database of books published over the last centuries, and its main consequence is the generalization of Zipf's and Heaps' laws to two scaling regimes. We confirm that these generalizations yield the best simple description of the data among generic descriptive models and that the two free parameters depend only on the language but not on the database. From the point of view of our model, the main change on historical time scales is in the composition of the specific words included in the finite list of core-words, whose membership we observe to decay exponentially in time at a rate of approximately 30 words per year for English.
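
As a concrete illustration of this mechanism, the following Python sketch simulates a Simon-type text-generation process with the two word classes described above. It is a minimal sketch, not the paper's model equations: the constant alpha, the core size, the exponent governing the decay of the new-word probability, and all function names are assumptions chosen only to reproduce the qualitative behavior (roughly linear vocabulary growth while core-words dominate, sublinear growth afterwards).

```python
import random

def simulate(n_tokens=200_000, alpha=0.1, core_size=1000, gamma=1.8, seed=0):
    """Generate a synthetic text with a two-class, Simon-type process.

    Illustrative assumptions (NOT the paper's exact equations):
    - while the vocabulary is smaller than `core_size`, the probability of
      introducing a new word stays constant (core-words do not suppress it);
    - beyond that, every additional noncore-word reduces the probability of
      further new words as a power of the current vocabulary size;
    - existing words are repeated proportionally to their current frequency
      (preferential repetition), which yields Zipf-like rank-frequency curves.
    """
    rng = random.Random(seed)
    text = []     # sequence of word ids generated so far
    vocab = 0     # number of distinct words used so far
    heaps = []    # (text size M, vocabulary size N) samples for a Heaps' plot

    for m in range(1, n_tokens + 1):
        if vocab < core_size:
            p_new = alpha
        else:
            # noncore regime: new-word probability decays with vocabulary size
            p_new = alpha * (core_size / vocab) ** (gamma - 1.0)
        if not text or rng.random() < p_new:
            word = vocab          # introduce a new word id
            vocab += 1
        else:
            # repeat a word with probability proportional to its frequency:
            # a uniformly random earlier token implements this exactly
            word = text[rng.randrange(len(text))]
        text.append(word)
        if m % 1000 == 0:
            heaps.append((m, vocab))
    return text, heaps

if __name__ == "__main__":
    _, heaps = simulate()
    # two regimes: roughly linear growth while N < core_size, sublinear after
    for m, n in heaps[::20]:
        print(f"M = {m:7d}   N = {n:6d}")
```

Plotting N against M on double-logarithmic axes for such a run shows the crossover between the two growth regimes that the model attributes to the core/noncore distinction.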

Highlights

  • Even in our time of big data [1,2,3], there is no indication of a saturation of the vocabulary size with increasing database size

  • In order to clarify whether it is meaningful to estimate a vocabulary size in the limit of infinitely large databases, it is essential to understand the birth and death of words [4,5,6], and the process governing the usage of new words and its dependence on database size

  • Our model is in the same spirit as, but differs from, the simpler versions of Yule’s, Simon’s, Gibrat’s, and preferential attachment growth models [26,27,28,29] because it contains two categories of words and leads to two scaling regimes in the Heaps’ and Zipf’s plots. These findings are supported by a statistical analysis of the Google Ngram database, indicating that the only two free parameters needed in the description of these scalings remain unchanged over centuries and depend only on the language, and that there is a slow change of the words belonging to each category


Summary

INTRODUCTION

Even in our time of big data [1,2,3], there is no indication of a saturation of the vocabulary size (total number of different words) with increasing database size. Our model is in the same spirit as, but differs from, the simpler versions of Yule’s, Simon’s, Gibrat’s, and preferential attachment growth models [26,27,28,29] because it contains two categories of words and leads to two scaling regimes in the Heaps’ and Zipf’s plots. These findings are supported by a statistical analysis of the Google Ngram database, indicating that the only two free parameters needed in the description of these scalings remain unchanged over centuries and depend only on the language, and that there is a slow change of the words belonging to each category. In Sec. IV, we investigate dynamical aspects on historical time scales within the framework of our model
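
To make "two scaling regimes" concrete, the displayed equations below sketch one natural way to write a rank-frequency (Zipf) law with a crossover rank b and exponent gamma, together with the corresponding vocabulary-growth (Heaps) behavior. The specific exponents and the continuity prefactor are illustrative reconstructions consistent with the description above, not equations quoted from the paper.

```latex
% Illustrative two-regime Zipf law: frequency F of the word with rank r,
% with crossover rank b (roughly the number of core-words) and exponent gamma.
F(r) \;\propto\;
\begin{cases}
  r^{-1}, & r \le b \quad \text{(core-words)}\\[4pt]
  b^{\gamma-1}\, r^{-\gamma}, & r > b \quad \text{(noncore-words)}
\end{cases}

% Corresponding two-regime Heaps law: number of different words N in a
% database of M word tokens.
N(M) \;\sim\;
\begin{cases}
  M, & \text{small } M\\[4pt]
  M^{1/\gamma}, & \text{large } M
\end{cases}
```

In this reading, the two language-dependent free parameters mentioned above would correspond to the crossover rank b and the exponent gamma.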

DATA ANALYSIS
Zipf’s analysis
Heaps’ analysis
HISTORICAL CHANGES
Findings
DISCUSSION
