Abstract

This paper presents a series of experiments in collective social robotics, spanning more than 10 years, with the long-term aim of building embodied models of (aspects of) cultural evolution. Initial experiments demonstrated the emergence of behavioural traditions in a group of social robots programmed to imitate each other’s behaviours (we call these Copybots). These experiments show that the noisy (i.e. less than perfect fidelity) imitation that comes for free with real physical robots gives rise naturally to variation in social learning. More recent experimental work extends the robots’ cognitive capabilities with simulation-based internal models, equipping them with a simple artificial theory of mind. With this extended capability we explore, in our current work, social learning not via imitation but via robot–robot storytelling, in an effort to model this very human mode of cultural transmission. In this paper, we give an account of the methods and inspiration for these experiments, the experiments and their results, and an outline of possible directions for this programme of research. It is our hope that this paper stimulates not only discussion but also suggestions for hypotheses to test with the Storybots. This article is part of a discussion meeting issue ‘The emergence of collective knowledge and cumulative culture in animals, humans and machines’.
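The variation-generating role of imperfect copying can be illustrated with a minimal sketch (a hypothetical toy, not the Copybots’ controller), in which a ‘meme’ is a short movement sequence and each act of imitation perturbs it with some probability:

```python
import random

# Hypothetical move vocabulary; the names are illustrative assumptions.
MOVES = ["forward", "left", "right", "stop"]

def imitate(observed_meme, mutation_rate=0.1):
    """Return an imperfect copy of an observed movement sequence."""
    copy = []
    for move in observed_meme:
        if random.random() < mutation_rate:
            # Imperfect perception or enactment resamples the move at random.
            copy.append(random.choice(MOVES))
        else:
            copy.append(move)
    return copy

seed_meme = ["forward", "left", "forward", "right"]
# Ten robots each copy the seed; most copies differ slightly from it.
population = [imitate(seed_meme) for _ in range(10)]
```

Even a modest per-move error rate means that, after repeated rounds of copying, a group carries a family of related but non-identical variants: the raw material for selection-like dynamics.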

Highlights

  • In this paper, we describe two sets of experiments with small groups of real robots, conducted over the course of more than 10 years, in the Bristol Robotics Lab.

  • Using the consequence engine (CE), we aim to explore social learning not via imitation but via robot–robot storytelling, in an effort to model this very human mode of cultural transmission.

  • We see one cluster of 12 closely related memes. This result suggests the intriguing conclusion that forgetting may be a significant collective trait in behavioural evolution [21], and might be related to what is referred to as conformist social learning, in which learners are more prone to act as others do [23]; a minimal sketch of conformist copying follows these highlights.
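To make the conformist bias concrete, here is a minimal, hypothetical sketch of frequency-dependent copying in the style of Boyd and Richerson (an illustration, not the authors’ implementation): with a conformity exponent greater than 1, the probability of adopting a behaviour grows faster than its observed frequency.

```python
import random

from collections import Counter

def conformist_choice(observed_behaviours, alpha=2.0):
    """Adopt a behaviour with probability proportional to its observed
    frequency raised to the conformity exponent alpha.
    alpha > 1 biases the learner towards the majority behaviour."""
    counts = Counter(observed_behaviours)
    behaviours = list(counts)
    weights = [counts[b] ** alpha for b in behaviours]
    return random.choices(behaviours, weights=weights, k=1)[0]

neighbours = ["dance_a"] * 7 + ["dance_b"] * 3
adopted = conformist_choice(neighbours)
```

With `alpha = 2.0`, a behaviour seen in 7 of 10 neighbours is adopted with probability 49/58 ≈ 0.84, rather than 0.70 under unbiased copying, so minority variants are disproportionately ‘forgotten’.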



Introduction

We describe two sets of experiments with small groups of real robots, conducted over the course of more than 10 years, in the Bristol Robotics Lab. The first set, with the Copybots, demonstrated the emergence of behavioural traditions in a group of robots programmed to imitate each other’s behaviours. A second and more recent set of experiments extends our robots’ cognitive capabilities with simulation-based internal models. A simulation-based internal model (literally a robot with a simulation of itself, inside itself) allows a robot to ask itself ‘what if’ questions. This capability has been described as a functional imagination [4], as it enables a robot to ‘imagine’ the consequences of its actions (and, in our implementation, the reaction of others to those actions). In addition to the consequence engine (CE), our ‘Storybots’ are being equipped with the means to communicate via speech, and with what Penn et al. [10] call the ‘spectacular scaffolding provided by language’. Our method for both sets of experiments is to build a working model or, as we prefer to describe it, an embodied simulation: a group of autonomous robots, programmed with simple behaviours, interacting with each other in an artificial arena.
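The following sketch illustrates the idea of a consequence engine in miniature (the world model, action set and scoring function are illustrative assumptions, not our implementation): before acting, the robot runs each candidate action through an internal simulation of itself and its world, and selects the action with the best predicted outcome.

```python
def simulate(state, action):
    """Internal 'what if' model: predict the next state under an action.
    (Toy grid-world dynamics, a stand-in for a physics simulation.)"""
    x, y = state["robot"]
    dx, dy = {"forward": (0, 1), "left": (-1, 0), "right": (1, 0)}[action]
    return {**state, "robot": (x + dx, y + dy)}

def evaluate(predicted_state, goal):
    """Score a predicted outcome: negative Manhattan distance to the goal."""
    x, y = predicted_state["robot"]
    gx, gy = goal
    return -(abs(gx - x) + abs(gy - y))

def consequence_engine(state, goal, actions=("forward", "left", "right")):
    """Imagine the consequences of each candidate action and pick
    the action with the best predicted outcome."""
    return max(actions, key=lambda a: evaluate(simulate(state, a), goal))

state = {"robot": (0, 0)}
best_action = consequence_engine(state, goal=(2, 0))  # -> "right"
```

In the Storybots the same machinery is pointed at other agents: the internal model predicts not only the consequences of the robot’s own actions but also the reactions of others to those actions, which is what makes it a simple artificial theory of mind.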

