Abstract

We introduce a novel co-design method for the shape attributes and locomotion of autonomous moving agents that combines deep reinforcement learning and evolution with user control. Our main inspiration comes from evolution, which has produced wide variability and adaptation in Nature and improves design and behavior simultaneously. Our method takes an input agent with optional user-defined constraints, such as leg parts that should not evolve or that may change only within allowed ranges. It uses physics-based simulation to determine the agent's locomotion and finds a behavior policy for the input design, which serves as a baseline for comparison. The agent is then randomly modified within the allowed ranges, creating a new generation of several hundred agents. Each generation is trained by transferring the previous policy, which significantly speeds up the training. The best-performing agents are selected, and a new generation is formed by their crossover and mutation. Subsequent generations are trained until satisfactory results are reached. We show a wide variety of evolved agents, and our results show that even with only 10% of allowed changes, the overall performance of the evolved agents improves by 50%; if larger changes are allowed, the experiments' performance improves even more, up to 150%. Our method significantly changes the agents' structures, and it does not require considerable computation resources, as it works on a single GPU and provides results by training thousands of agents within 30 minutes.
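To make the loop concrete, the following is a minimal Python sketch of the evolve-train-select cycle the abstract describes. The dict-based agent encoding and the train_policy and fitness callables are hypothetical placeholders for the paper's physics-based simulation and reinforcement-learning training, not the authors' implementation; the population size follows the abstract's "several hundred agents", while the elite count and mutation step are assumptions.

# Hypothetical sketch of the evolution + policy-transfer loop described above.
# The agent encoding and the train_policy / fitness callables are illustrative
# assumptions, not the authors' code.
import random

POP_SIZE = 300   # "several hundred agents" per generation (from the abstract)
N_ELITES = 30    # assumption: number of best performers kept for breeding

def mutate(agent, ranges, frozen):
    """Randomly perturb shape attributes within user-allowed ranges,
    leaving user-frozen parts untouched."""
    child = dict(agent)
    for attr, (lo, hi) in ranges.items():
        if attr not in frozen:
            step = 0.1 * (hi - lo)  # assumption: perturb by up to 10% of range
            child[attr] = min(hi, max(lo, child[attr] + random.uniform(-step, step)))
    return child

def crossover(a, b):
    """Mix shape attributes from two parent agents."""
    return {attr: random.choice((a[attr], b[attr])) for attr in a}

def co_design(seed, ranges, frozen, n_generations, train_policy, fitness):
    """train_policy(agent, init_policy) -> policy, via RL in simulation;
    fitness(agent, policy) -> locomotion score from simulation."""
    # Baseline: a behavior policy trained for the unmodified input design.
    policy = train_policy(seed, init_policy=None)
    best_score, best_agent = fitness(seed, policy), seed
    population = [mutate(seed, ranges, frozen) for _ in range(POP_SIZE)]
    for _ in range(n_generations):
        # Warm-start every agent's training from the previous policy
        # (the transfer step the abstract credits with the large speed-up).
        scored = []
        for agent in population:
            agent_policy = train_policy(agent, init_policy=policy)
            scored.append((fitness(agent, agent_policy), agent, agent_policy))
        scored.sort(key=lambda s: s[0], reverse=True)
        if scored[0][0] > best_score:
            best_score, best_agent = scored[0][0], scored[0][1]
        policy = scored[0][2]  # carry the best policy into the next generation
        elites = [agent for _, agent, _ in scored[:N_ELITES]]
        # Next generation: crossover + mutation of the best performers.
        population = [mutate(crossover(*random.sample(elites, 2)), ranges, frozen)
                      for _ in range(POP_SIZE)]
    return best_agent, best_score

# Example with stub callables (stand-ins for RL training and simulation):
# best, score = co_design({"leg_len": 1.0}, {"leg_len": (0.5, 2.0)}, frozen=set(),
#                         n_generations=5, train_policy=lambda a, init_policy: None,
#                         fitness=lambda a, p: -abs(a["leg_len"] - 1.5))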
