Abstract

The word-frequency distribution of a text written by an author is well accounted for by a maximum entropy distribution, the RGF (random group formation) prediction. The RGF distribution is completely determined by the a priori values of the total number of words in the text (M), the number of distinct words (N), and the number of repetitions of the most common word (kmax). It is here shown that this maximum entropy prediction also describes a text written in Chinese characters. In particular, it is shown that although the same Chinese text written in words and written in Chinese characters has quite differently shaped distributions, both are nevertheless well predicted by their respective three a priori characteristic values. This is analogous to the change in the shape of the distribution when a given text is translated into another language. Another consequence of the RGF prediction is that taking a part of a long text changes the input parameters (M, N, kmax) and consequently also the shape of the frequency distribution. This is explicitly confirmed for texts written in Chinese characters. Since the RGF prediction contains no system-specific information beyond the three a priori values (M, N, kmax), any specific language characteristic has to be sought in systematic deviations between the RGF prediction and the measured frequencies. One such systematic deviation is identified and, through a statistical information-theoretical argument and an extended RGF model, it is proposed that this deviation is caused by the multiple meanings of Chinese characters. The effect is stronger for Chinese characters than for Chinese words. The relation between Zipf's law, the Simon model for texts, and the present results is discussed.
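
For concreteness, a minimal numerical sketch of how the three a priori values fix the prediction is given below. It assumes the commonly quoted RGF functional form P(k) = A k^(-gamma) e^(-bk) from the earlier RGF literature, whose three parameters are pinned down by normalization, by the average number of occurrences per distinct word M/N, and by the requirement that exactly one word is expected at or above kmax. The function name and the example numbers are hypothetical, not taken from the paper.

    # Minimal sketch (assumptions as stated above): solve for (A, b, gamma)
    # in P(k) = A * k**(-gamma) * exp(-b*k) from the three a priori values.
    import numpy as np
    from scipy.optimize import fsolve

    def rgf_prediction(M, N, kmax, k_cut=None):
        """Return (A, b, gamma) and P(k) on k = 1..k_cut."""
        k = np.arange(1, (k_cut or 10 * kmax) + 1, dtype=float)

        def residuals(params):
            A, b, gamma = params
            P = A * k ** (-gamma) * np.exp(-b * k)  # assumed RGF form
            return [
                P.sum() - 1.0,                 # normalization: sum_k P(k) = 1
                (k * P).sum() - M / N,         # mean occurrences per distinct word
                N * P[k >= kmax].sum() - 1.0,  # one word expected with k >= kmax
            ]

        A, b, gamma = fsolve(residuals, x0=[0.5, 1e-4, 1.8])
        return A, b, gamma, A * k ** (-gamma) * np.exp(-b * k)

    # Example with made-up text statistics (hypothetical numbers):
    A, b, gamma, P = rgf_prediction(M=100_000, N=10_000, kmax=6_000)
    print(f"gamma = {gamma:.3f}, b = {b:.3g}")

Nothing else is fitted: the predicted curve can then be compared directly with the measured word- or character-frequency distribution, and any language-specific feature shows up as a systematic deviation.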

Highlights

  • The scientific interest in the information content hidden in the frequency statistics of words and letters in a text goes back at least to Islamic scholars in the ninth century

  • This maximum entropy (RGF) prediction is here shown to also describe a text written in Chinese characters

  • In the twentieth century it was discovered that the word frequencies in a text typically have a broad “fat-tailed” distribution, which can often be well approximated by a power law over a large range [2,3,4,5]



Introduction

The scientific interest in the information content hidden in the frequency statistics of words and letters in a text goes back at least to Islamic scholars in the ninth century. In the middle of the twentieth century, Simon [11] suggested that since quite a few completely different systems seemed to follow Zipf's law in their corresponding frequency distributions, the explanation of the law must be more general and stochastic in nature, independent of any specific information about the language itself. He accordingly proposed a random stochastic growth model for a book written one word at a time from beginning to end. A related null model is the "monkey book", a text produced by typing letters and spaces at random: the monkey book is definitely translationally invariant, but its properties are quite unrealistic and differ from those of a real text [26]
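
As a rough illustration (not code from the paper), Simon's growth rule can be sketched as follows: at each step a brand-new word enters the book with a small probability alpha, and otherwise an already-written token is copied, which repeats an old word with probability proportional to its current frequency. All names and parameter values here are hypothetical.

    # Rough sketch of the Simon growth model under the assumptions above.
    import random

    def simon_text(M, alpha=0.05, seed=0):
        """Grow a book of M tokens, one word at a time: with probability
        alpha append a new word, otherwise copy a uniformly chosen earlier
        token (i.e. repeat an old word proportionally to its frequency)."""
        rng = random.Random(seed)
        text = ["w0"]
        n_new = 1
        for _ in range(M - 1):
            if rng.random() < alpha:
                text.append(f"w{n_new}")       # introduce a brand-new word
                n_new += 1
            else:
                text.append(rng.choice(text))  # preferential attachment
        return text

    book = simon_text(M=50_000)
    print(len(book), "tokens,", len(set(book)), "distinct words")

Grown long enough, such a book develops the fat-tailed word-frequency statistics that Simon used to motivate Zipf's law; note, however, that the rule is not translationally invariant, since words written early accumulate a lasting advantage.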
