Abstract

This paper addresses the syntactic alignment of a robot team by means of dialogic language games and online probabilistic reinforcement learning algorithms. Syntactic alignment is studied under two configurations of the team: (a) when the team consists exclusively of robots, and (b) when a human is included in the team, in which case the human uses a natural language to communicate with the other members. In both configurations we analyze the convergence of the team to an optimal common language. The main contribution of the paper is the application of stochastic regular grammars, equipped with learning capability, to generate the robot team's language. Beyond analyzing convergence to a common language for a fully autonomous robot team without human intervention, we are particularly interested in how the syntactic alignment of the team can be influenced or mediated by humans. The paper is organized as follows: first, we describe the syntactic language games, in particular the type of grammar and the syntactic rules of the robot team's language, and the dynamics of the games, which are based on dialogic communicative acts and a reinforcement learning policy that allows the team to converge to a common language; afterwards, the experimental results are presented and discussed. The experimental work is organized around the linguistic description of visual scenes of the blocks-world type. The general conclusion of our experiments can be stated briefly: in the fully autonomous case (robots only) the final emergent grammar is arbitrary, whereas when a human is included in the team the final emergent grammar is the one used by the human.
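To make the game dynamics concrete, the sketch below implements a minimal syntactic language game in Python with a lateral-inhibition style score update over competing grammar rules. The agent class, rule notation, meanings, and update constants are illustrative assumptions for this sketch only; they are not the paper's actual stochastic regular grammars or its online probabilistic reinforcement learning algorithm.

```python
import random


class GrammarAgent:
    """Robot holding competing syntactic rules for describing blocks-world relations.

    Each meaning (e.g. a spatial relation) maps to several candidate production
    rules, each carrying a preference score in [0, 1].
    """

    def __init__(self, candidate_rules):
        # scores[meaning][rule] -> current preference for that rule (random start)
        self.scores = {m: {r: random.random() for r in rules}
                       for m, rules in candidate_rules.items()}

    def produce(self, meaning):
        """Speaker role: utter the currently preferred rule for a meaning."""
        rules = self.scores[meaning]
        return max(rules, key=rules.get)

    def reinforce(self, meaning, rule, delta):
        """Reward the used rule and laterally inhibit its competitors."""
        for r in self.scores[meaning]:
            change = delta if r == rule else -delta
            self.scores[meaning][r] = min(1.0, max(0.0, self.scores[meaning][r] + change))

    def inhibit(self, meaning, rule, delta):
        """Penalise a rule that led to a failed game."""
        self.scores[meaning][rule] = max(0.0, self.scores[meaning][rule] - delta)


def play_game(speaker, hearer, meaning, delta=0.1):
    """One dialogic game round: success when both agents prefer the same rule."""
    rule = speaker.produce(meaning)
    success = rule == hearer.produce(meaning)
    if success:
        speaker.reinforce(meaning, rule, delta)
        hearer.reinforce(meaning, rule, delta)
    else:
        speaker.inhibit(meaning, rule, delta)
        hearer.reinforce(meaning, rule, delta / 2)  # hearer drifts toward the heard form
    return success


if __name__ == "__main__":
    candidates = {
        "on":      ["S -> N 'on' N", "S -> 'on' N N"],
        "left_of": ["S -> N 'left_of' N", "S -> 'left_of' N N"],
    }
    team = [GrammarAgent(candidates) for _ in range(5)]  # fully autonomous robot team
    for _ in range(3000):
        speaker, hearer = random.sample(team, 2)
        play_game(speaker, hearer, random.choice(list(candidates)))
    # Typically the team settles on a shared but arbitrary rule per meaning.
    print({m: team[0].produce(m) for m in candidates})
```

Under these assumptions, the human-in-the-team case would correspond to adding an agent whose scores are fixed to its own grammar, so that the remaining agents drift toward the human's rules rather than an arbitrary consensus.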
