Abstract
Linguistic discourses treated as maximum entropy systems of words, according to the prescriptions of algorithmic information theory (Kolmogorov, Chaitin, & Zurek), are shown to give a natural explanation of Zipf's law with quantitative rigor. The pattern of word frequencies in discourse naturally leads to a distinction between two classes of words: content words (c‐words) and service words (s‐words). A unified entropy model for the two classes of words yields word frequency distribution functions in accordance with data. The model draws on principles of classical and quantum statistical mechanics and emphasises general principles of classifying, counting and optimising the associated costs of coding sequential symbols, under certain obvious constraints; hence it is likely to be valid for diverse complex systems in nature. Unlike other models of Zipf's law, which require an exponential distribution of word lengths, entropy models based on words as primary symbols do not restrict the word length distribution.
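The abstract does not give an implementation, but the empirical regularity it explains can be illustrated directly. The following Python sketch counts word frequencies in a plain-text corpus, ranks them, and fits the Zipf exponent on a log-log scale; it is not the authors' entropy model, and the corpus file name and the 5000-rank cutoff are hypothetical choices for the example.

```python
# Minimal empirical illustration of Zipf's law (not the authors' entropy model):
# count word frequencies, sort by rank, and fit log f_r = c - B * log r.
import re
from collections import Counter

import numpy as np


def rank_frequency(text: str) -> np.ndarray:
    """Return word frequencies sorted in descending order (rank 1 = most frequent)."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    return np.array(sorted(counts.values(), reverse=True), dtype=float)


def fit_zipf_exponent(freqs: np.ndarray, max_rank: int = 5000) -> float:
    """Least-squares fit of the rank-frequency slope; B close to 1 is the classical Zipf value."""
    f = freqs[:max_rank]
    ranks = np.arange(1, len(f) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(f), 1)
    return -slope


if __name__ == "__main__":
    # "corpus.txt" is a placeholder for any sufficiently long discourse.
    with open("corpus.txt", encoding="utf-8") as fh:
        freqs = rank_frequency(fh.read())
    print(f"Estimated Zipf exponent B ~ {fit_zipf_exponent(freqs):.2f}")
```

For a long natural-language text the fitted exponent typically comes out near 1 over the high-frequency (s-word) range, with deviations at the low-frequency tail dominated by c-words, which is the regime the abstract's two-class model addresses.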