Abstract

Mathematical models used to explain the power-law distribution of word frequencies observed in natural languages (Zipf's law) generally assume that symbols and words occur independently, i.e., that they do not interact. Here we show that when interaction is taken into account by allowing words to compete among themselves for space in the memory of language users, the resulting word-frequency distribution is best described by an exponential rather than by a power law. We discuss the implications of this failure to derive Zipf's law under more realistic assumptions.
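The contrast between the two regimes can be made concrete with a toy simulation. The sketch below (Python) compares a classic non-interacting growth process (Simon's preferential-repetition model, a standard route to Zipf's law) with a hypothetical interacting variant in which words must compete for a fixed number of memory slots. The functions simon_growth and memory_competition, the parameters p_new and capacity, and the random-displacement rule are illustrative assumptions, not the model analyzed in the paper.

```python
# Toy sketch (not the authors' model): contrast a non-interacting
# growth process with a fixed-capacity "memory" in which words
# compete for slots by displacing one another.
import random
from collections import Counter

random.seed(0)

def simon_growth(steps, p_new=0.1):
    """Non-interacting baseline: Simon's process. Each step either
    introduces a new word (prob p_new) or repeats an already-used
    word chosen proportionally to its past frequency."""
    words = [0]
    next_id = 1
    for _ in range(steps):
        if random.random() < p_new:
            words.append(next_id)
            next_id += 1
        else:
            words.append(random.choice(words))  # preferential repetition
    return Counter(words)

def memory_competition(steps, capacity=1000, p_new=0.1):
    """Hypothetical interacting variant: memory holds a fixed number
    of slots; every arrival (a new word, or a repetition of a word
    already in memory) evicts a randomly chosen occupant, so words
    compete for space."""
    memory = [0] * capacity
    next_id = 1
    for _ in range(steps):
        if random.random() < p_new:
            arrival = next_id
            next_id += 1
        else:
            arrival = random.choice(memory)
        memory[random.randrange(capacity)] = arrival  # displacement
    return Counter(memory)

for name, counts in [("Simon (no interaction)", simon_growth(50_000)),
                     ("competition for memory", memory_competition(50_000))]:
    ranked = sorted(counts.values(), reverse=True)[:10]
    print(f"{name}: top-10 frequencies {ranked}")
```

Mechanically, the two processes differ in one respect: in the non-interacting baseline early words can accumulate counts bounded only by the total number of steps, producing the heavy tail characteristic of Zipf's law, whereas the fixed capacity in the interacting variant caps how large any word's share of memory can grow. Whether the resulting distribution is exactly exponential depends on the model details; this sketch only illustrates the competition mechanism the abstract refers to.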
