Abstract

Synonyms and homonyms appear in all natural languages. We analyze their evolution within the framework of the signaling game. Agents in our model use reinforcement learning, where the probabilities of selecting a communicated word, or of its interpretation, depend on weights equal to the number of accumulated successful communications. When the probabilities increase linearly with the weights, synonyms appear to be very stable and homonyms decline relatively fast. Such behavior seems to be at odds with linguistic observations. A better agreement is obtained when the probabilities increase faster than linearly with the weights. Our results may suggest that a certain positive feedback, akin to the so-called Metcalfe's Law, drives some linguistic processes. The evolution of synonyms and homonyms in our model can be approximately described by a certain nonlinear urn model.

Highlights

  • The evolution and structure of language are often analyzed using computational modeling [1,2,3]

  • We suggest that the presence of synonyms and homonyms in natural languages may give us some valuable clues as to the nature of the mechanisms that drive linguistic processes

  • Within the framework of the signaling game, we argue that reinforcement learning should operate in the super-linear regime (α > 1), with selection probabilities increasing faster than linearly with the accumulated weights
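The selection rule described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: an option's selection probability is taken to be proportional to its accumulated weight raised to a power α, so that α = 1 gives linear reinforcement and α > 1 gives the super-linear regime the highlights refer to. The function name `selection_probs` is our own.

```python
def selection_probs(weights, alpha=1.0):
    """Probability of selecting each option when the chance of
    selection is proportional to weight**alpha.

    alpha == 1 : linear reinforcement (probability proportional to weight)
    alpha > 1  : super-linear regime; heavily reinforced options are
                 favored disproportionately (a Metcalfe-like feedback)
    """
    powered = [w ** alpha for w in weights]
    total = sum(powered)
    return [p / total for p in powered]

# One option reinforced three times as often as the other:
linear = selection_probs([1, 3], alpha=1.0)       # [0.25, 0.75]
superlinear = selection_probs([1, 3], alpha=2.0)  # [0.1, 0.9]
```

With the same 1:3 weight ratio, raising α from 1 to 2 pushes the favorite's selection probability from 0.75 to 0.9, which is the kind of positive feedback that can destabilize coexisting synonyms.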


Introduction

The evolution and structure of language are often analyzed using computational modeling [1,2,3]. An appealing research paradigm is inspired by the idea that language might have spontaneously appeared in a population of communicating individuals, possibly with some adaptive features [4]. This standpoint prompted numerous analyses of multi-agent models, which mimic such communication and try to infer the properties of the emerging language and its possible further evolution [5,6,7]. In certain models of this kind, language emergence and evolution are studied using the signaling game [8], where communicating agents must decide which signal (i.e., a word) to send or how to interpret the signal they have received. Language that emerges in such models may provide a unique form-meaning mapping (in signaling-game terminology, it is a signaling system), but there are other possibilities. We briefly describe a certain urn model that will help us to understand some aspects of our multi-agent signaling game.
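To make the urn-model idea concrete, here is a minimal sketch of one generic nonlinear (Pólya-type) urn, not the specific model of the paper: at each step a color is drawn with probability proportional to its current count raised to a power α, and one ball of the drawn color is added. The function name `urn_step` and the two-color setup are our own assumptions for illustration.

```python
import random

def urn_step(counts, alpha, rng):
    """One step of a nonlinear urn: draw color i with probability
    proportional to counts[i]**alpha, then add one ball of that color."""
    weights = [c ** alpha for c in counts]
    r = rng.uniform(0.0, sum(weights))
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r < acc:
            counts[i] += 1
            return i
    counts[-1] += 1  # numerical edge case: r landed on the boundary
    return len(counts) - 1

# In the super-linear regime (alpha > 1) the urn tends to "freeze":
# one color eventually dominates, mirroring the decline of competing forms.
rng = random.Random(0)
counts = [1, 1]
for _ in range(10000):
    urn_step(counts, alpha=2.0, rng=rng)
```

Running the loop with α = 2 typically leaves one color holding the overwhelming majority of the balls, whereas α = 1 (the classical Pólya urn) allows stable coexistence of both colors.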

Nonlinear Urn Model
Multi-Agent Signaling Game with Reinforcement Learning
Synonyms
Homonyms
Findings
Conclusions