Abstract

The intertwined processes of learning and evolution in complex environmental niches have resulted in a remarkable diversity of morphological forms. Moreover, many aspects of animal intelligence are deeply embodied in these evolved morphologies. However, the principles governing relations between environmental complexity, evolved morphology, and the learnability of intelligent control remain elusive, because performing large-scale in silico experiments on evolution and learning is challenging. Here, we introduce Deep Evolutionary Reinforcement Learning (DERL): a computational framework that can evolve diverse agent morphologies to learn challenging locomotion and manipulation tasks in complex environments. Leveraging DERL, we demonstrate several relations between environmental complexity, morphological intelligence, and the learnability of control. First, environmental complexity fosters the evolution of morphological intelligence as quantified by the ability of a morphology to facilitate the learning of novel tasks. Second, we demonstrate a morphological Baldwin effect: in our simulations, evolution rapidly selects morphologies that learn faster, thereby enabling behaviors learned late in the lifetimes of early ancestors to be expressed early in the lifetimes of their descendants. Third, we suggest a mechanistic basis for the above relationships through the evolution of morphologies that are more physically stable and energy efficient, and can therefore facilitate learning and control.

Highlights

  • The intertwined processes of learning and evolution in complex environmental niches have resulted in a remarkable diversity of morphological forms

  • A prerequisite for realizing this goal is the ability to simultaneously scale the creation of embodied agents across three axes of complexity: environmental, morphological, and control, without relying on heuristics to speed up fitness evaluation. To address this challenging requirement, we propose Deep Evolutionary Reinforcement Learning (DERL) (Fig. 1a), a conceptually simple computational framework that mimics the intertwined processes of Darwinian evolution over generations to search over morphologies, and reinforcement learning within a lifetime to learn intelligent behavior from low-level egocentric sensory information.

  • A key component of the DERL framework is the use of distributed asynchronous evolutionary search to parallelize the computations underlying learning, thereby allowing us to leverage the scaling of computation and models that has been so successful in other fields of AI [6,25–27] and bring it to bear on the field of evolutionary robotics; a minimal sketch of this loop follows below.
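To make the two preceding points concrete, here is a minimal, self-contained Python sketch of a DERL-style loop. It is only an illustration under stated assumptions: the real framework trains a deep RL control policy inside every fitness evaluation and runs its tournament-based search asynchronously, whereas this toy substitutes a scalar objective for lifetime learning and uses a synchronous generational loop. All names (train_with_rl, mutate, evolve) are hypothetical, not the authors' API.

```python
import random
from concurrent.futures import ProcessPoolExecutor

def train_with_rl(morphology):
    """Stand-in for lifetime learning. In DERL this would train a policy
    with reinforcement learning from egocentric sensory input and return
    the resulting task reward; here we score a toy objective instead."""
    return -sum((x - 0.5) ** 2 for x in morphology) + random.gauss(0, 0.01)

def mutate(morphology, scale=0.1):
    """Perturb one randomly chosen morphological parameter."""
    child = list(morphology)
    i = random.randrange(len(child))
    child[i] += random.gauss(0, scale)
    return child

def evolve(pop_size=16, generations=20, tournament_size=4, n_params=8):
    """Evolution outer loop: evaluate fitness in parallel, then apply
    tournament selection plus mutation to produce the next population."""
    population = [[random.random() for _ in range(n_params)]
                  for _ in range(pop_size)]
    with ProcessPoolExecutor() as pool:
        for gen in range(generations):
            # Each fitness evaluation is an independent "lifetime", so the
            # evaluations can run concurrently, as in DERL's distributed
            # asynchronous search (here simplified to a synchronous map).
            fitness = list(pool.map(train_with_rl, population))
            next_pop = []
            for _ in range(pop_size):
                contenders = random.sample(range(pop_size), tournament_size)
                winner = max(contenders, key=lambda i: fitness[i])
                next_pop.append(mutate(population[winner]))
            population = next_pop
            print(f"gen {gen}: best fitness {max(fitness):.4f}")
    return population

if __name__ == "__main__":
    evolve()
```

Because each call to train_with_rl is independent, the evaluations parallelize across processes here and, in a DERL-style system, across many machines; dropping the generation barrier in favor of asynchronous tournaments keeps workers busy even when lifetimes take unequal wall-clock time.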


Introduction

The intertwined processes of learning and evolution in complex environmental niches have resulted in a remarkable diversity of morphological forms. The creation of artificial embodied agents [9,10] with well-adapted morphologies that can learn control tasks in diverse, complex environments is challenging because of the twin difficulties of (1) searching through a combinatorially large number of possible morphologies, and (2) the computational time required to evaluate fitness through lifetime learning. A prerequisite for realizing this goal is the ability to simultaneously scale the creation of embodied agents across three axes of complexity: environmental, morphological, and control, without relying on heuristics to speed up fitness evaluation. To address this challenging requirement, we propose Deep Evolutionary Reinforcement Learning (DERL) (Fig. 1a), a conceptually simple computational framework that mimics the intertwined processes of Darwinian evolution over generations to search over morphologies, and reinforcement learning within a lifetime to learn intelligent behavior from low-level egocentric sensory information. We further uncover a mechanistic basis for these relationships through the evolution of morphologies that are more physically stable and energy efficient.
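The mechanistic claim at the end of this passage, that evolution favors morphologies which are more physically stable and energy efficient, presupposes some way of measuring those properties from a simulated rollout. Below is one plausible way to operationalize them in Python; these are assumed proxy definitions for illustration, not necessarily the exact metrics used in the paper.

```python
import numpy as np

def energy_proxy(torques, joint_velocities, dt):
    """Assumed mechanical-work proxy for energy use: the time integral of
    |joint torque x joint angular velocity|, summed over all joints.
    torques, joint_velocities: arrays of shape (timesteps, n_joints)."""
    return float(np.sum(np.abs(torques * joint_velocities)) * dt)

def stability_proxy(torso_pitch, torso_roll, max_tilt=0.5):
    """Assumed stability proxy: fraction of timesteps the torso orientation
    stays within +/- max_tilt radians of upright on both axes."""
    upright = (np.abs(torso_pitch) < max_tilt) & (np.abs(torso_roll) < max_tilt)
    return float(np.mean(upright))

# Toy usage on synthetic rollout data (1000 timesteps, 8 joints).
rng = np.random.default_rng(0)
T, J = 1000, 8
print(energy_proxy(rng.normal(size=(T, J)), rng.normal(size=(T, J)), dt=0.01))
print(stability_proxy(rng.normal(scale=0.3, size=T), rng.normal(scale=0.3, size=T)))
```

Under proxies like these, a morphology that covers the same distance with a lower energy_proxy, or that keeps stability_proxy near 1 under perturbations, presents an easier control problem, which is one way such morphologies could facilitate faster learning.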

