Abstract

Online evolution gives robots the capacity to learn new tasks and to adapt to changing environmental conditions during task execution. Previous approaches to online evolution of neural controllers are typically limited to the optimisation of weights in networks with a prespecified, fixed topology. In this article, we propose a novel approach to online learning in groups of autonomous robots called odNEAT. odNEAT is a distributed and decentralised neuroevolution algorithm that evolves both weights and network topology. We demonstrate odNEAT in three multirobot tasks: aggregation, integrated navigation and obstacle avoidance, and phototaxis. Results show that odNEAT approximates the performance of rtNEAT, an efficient centralised method, and outperforms IM-(μ + 1), a decentralised neuroevolution algorithm. Compared with rtNEAT and IM-(μ + 1), odNEAT's evolutionary dynamics lead to the synthesis of less complex neural controllers with superior generalisation capabilities. We show that robots executing odNEAT can display a high degree of fault tolerance as they are able to adapt and learn new behaviours in the presence of faults. We conclude with a series of ablation studies to analyse the impact of each algorithmic component on performance.

Highlights

  • Evolutionary computation has been widely studied and applied as a means to automate the design of robotic systems (Floreano and Keller, 2010)

  • We present a novel algorithm for online evolution of artificial neural network (ANN)-based controllers in multirobot systems called Online Distributed NeuroEvolution of Augmenting Topologies (odNEAT). odNEAT is completely decentralised and can be distributed across multiple robots. odNEAT is characterised by maintaining genetic diversity, protecting topological innovations, keeping track of poor solutions to the current task in a tabu list, and exploiting the exchange of genetic information between robots for faster adaptation

  • We have presented a novel distributed and decentralised neuroevolution algorithm called odNEAT for online learning in groups of robots. odNEAT implements the online evolutionary process according to a physically distributed island model
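The tabu list mentioned in the highlights keeps a bounded memory of recently discarded poor solutions so that similar controllers are not re-adopted. A minimal sketch of this idea follows; the capacity, the identifier-based membership test, and all names are illustrative assumptions (odNEAT compares genomes by similarity rather than by identity, which is not shown here):

```python
from collections import deque

class TabuList:
    """Fixed-size memory of recently discarded (poor) controllers.

    Capacity and the exact membership test are assumptions for
    illustration; the algorithm itself uses a genome-similarity check.
    """

    def __init__(self, capacity=20):
        # deque with maxlen evicts the oldest entry automatically
        self._items = deque(maxlen=capacity)

    def add(self, genome_id):
        """Record a discarded solution."""
        self._items.append(genome_id)

    def contains(self, genome_id):
        """True if this solution was recently discarded."""
        return genome_id in self._items
```

A receiving robot would consult such a list before incorporating a transmitted genome, skipping candidates that match a recently discarded solution.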


Summary

Introduction

Evolutionary computation has been widely studied and applied as a means to automate the design of robotic systems (Floreano and Keller, 2010). Controllers are typically based on artificial neural networks (ANNs). We review the main approaches in the literature for online evolution of ANN-based controllers in multirobot systems, and the main characteristics of NEAT, rtNEAT, and IM-(μ + 1). The evolutionary process takes place when robots meet and exchange genetic information. Robots that receive gene transmissions incorporate this genetic material into their genomes with a probability inversely proportional to their fitness. In this way, selection and variation operators are implemented in a distributed manner through the interactions between robots.
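The acceptance rule above can be sketched as follows. The summary does not state the exact probability function, so the inverse-proportional form below, along with all function and parameter names, is an assumption for illustration:

```python
import random

def accept_probability(own_fitness):
    """Probability of accepting a received genome, inversely
    proportional to the receiving robot's current fitness.
    The 1/(1 + f) form is an assumed normalisation, not the
    paper's exact formula."""
    return 1.0 / (1.0 + max(own_fitness, 0.0))

def maybe_incorporate(repertoire, received_genome, own_fitness, rng=random):
    """Incorporate received genetic material with probability
    inversely proportional to the robot's fitness: low-performing
    robots are more likely to adopt foreign genes."""
    if rng.random() < accept_probability(own_fitness):
        repertoire.append(received_genome)
        return True
    return False
```

Under this sketch, a robot with fitness 0 accepts every transmission, while increasingly fit robots become increasingly conservative, which matches the selection pressure described above.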

