Abstract

This paper presents neuro-augmented vision for evolutionary robotics (NAVER), which addresses the two biggest challenges facing evolutionary controllers for camera-equipped robots. The first challenge is that camera images typically require the controller to have many inputs, which greatly enlarges the search space and makes optimisation far more difficult. The second is that evolved controllers often fail to bridge the reality gap between simulation and the real world. NAVER uses a variational autoencoder to compress camera images into small latent vectors that are easier to handle while still retaining the relevant information of the original image. Autoencoders are also used to strip unnecessary detail from real-world images so that they better match the images produced by a simple visual simulator. NAVER is used to evolve a robot controller that relies solely on camera input to navigate a maze, following visual cues and avoiding collisions. Experimental results show that the controller evolved in simulation transferred to the physical robot, where it successfully performed the same navigation task: it navigates the maze using only visual information and changes its behaviour in response to visual cues. NAVER shows great promise, having successfully evolved a controller for what is (to date) the most complex vision-based task in the evolutionary robotics literature.
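To make the dimensionality-reduction idea concrete, the sketch below illustrates the kind of VAE bottleneck the abstract describes: an encoder maps a high-dimensional camera image to a small latent vector via the reparameterisation trick, and that latent vector (rather than the raw pixels) becomes the evolved controller's input. This is a toy illustration under stated assumptions, not the paper's implementation: the encoder weights here are random projections standing in for a trained network, and the image size (64x64x3) and latent size (16) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(image: np.ndarray, latent_dim: int = 16) -> np.ndarray:
    """Toy stand-in for a trained VAE encoder.

    A real encoder would be a trained neural network; here random linear
    projections produce the mean and log-variance of the latent Gaussian.
    """
    flat = image.reshape(-1)
    w_mu = rng.standard_normal((latent_dim, flat.size)) / np.sqrt(flat.size)
    w_logvar = rng.standard_normal((latent_dim, flat.size)) / np.sqrt(flat.size)
    mu = w_mu @ flat
    logvar = w_logvar @ flat
    # Reparameterisation trick: sample z = mu + sigma * eps, eps ~ N(0, I)
    eps = rng.standard_normal(latent_dim)
    return mu + np.exp(0.5 * logvar) * eps

# A 64x64 RGB frame (12288 raw values) compresses to a 16-value controller input.
image = rng.random((64, 64, 3))
z = encode(image)
print(z.shape)  # → (16,)
```

The controller's search space thus scales with the latent size (16 weights per input neuron) instead of the pixel count (12288), which is the reduction in optimisation complexity the abstract refers to.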
