Abstract

Evolutionary robotics (ER) is a field of research that applies artificial evolution to the automatic design and synthesis of intelligent robot controllers. The preceding decade saw numerous advances in evolutionary robotics hardware and software systems. However, the sophistication of the resulting robot controllers has remained nearly static over that period. Here, we make the case that current methods of controller fitness evaluation are a primary factor limiting the further development of ER. To address this, we define a form of fitness evaluation that relies on intra-population competition. In this research, complex neural networks were trained to control robots playing a competitive team game. To limit the amount of human bias or know-how injected into the evolving controllers, selection was based solely on whether controllers won or lost games. The robots relied on video sensing of their environment, and the neural networks required on the order of 150 inputs. This represents an order-of-magnitude increase in sensor complexity compared to other research in this field. Evolved controllers were tested extensively in real, fully autonomous robots and in simulation. Results and experiments are presented to characterize the training process and the acquisition of controller competency under different evolutionary conditions.
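The selection scheme described above can be sketched in outline. This is a minimal, hypothetical illustration of intra-population competitive selection, not the paper's actual implementation: `play_game`, `mutate`, and the tournament sizes are assumptions introduced here for clarity. The key idea it shows is that fitness is derived only from wins and losses against other members of the same population, with no hand-crafted scoring.

```python
import random

def evolve(population, play_game, mutate, generations=50, matches=3, seed=0):
    """Evolve controllers by intra-population competition (sketch).

    Assumed interfaces (hypothetical, not from the paper):
      play_game(a, b) -> returns the winning controller object
      mutate(c)       -> returns a perturbed copy of controller c
    Fitness is simply the number of games won, mirroring the paper's
    win/loss-only selection criterion.
    """
    rng = random.Random(seed)
    n = len(population)
    for _ in range(generations):
        wins = [0] * n
        # Each controller plays a few randomly drawn opponents
        # from the same population.
        for i in range(n):
            for _ in range(matches):
                j = rng.randrange(n)
                if j == i:
                    continue
                winner = play_game(population[i], population[j])
                wins[i if winner is population[i] else j] += 1
        # Winners survive; losers are replaced by mutated winners.
        ranked = sorted(range(n), key=lambda k: -wins[k])
        survivors = [population[k] for k in ranked[: n // 2]]
        population = survivors + [
            mutate(rng.choice(survivors)) for _ in range(n - n // 2)
        ]
    return population
```

Because opponents come from the evolving population itself, the difficulty of the fitness test rises as the controllers improve, which is the property the abstract argues is missing from fixed, hand-designed fitness functions.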
