Abstract

Soft robots offer a host of benefits over traditional rigid robots, including inherent compliance that lets them passively adapt to variable environments and operate safely around humans and fragile objects. However, that same compliance makes it hard to use model-based methods in planning tasks requiring high precision or complex actuation sequences. Reinforcement learning (RL) can potentially find effective control policies, but training RL on physical soft robots is often infeasible, and training in simulation has had a high barrier to adoption. To accelerate research in control and RL for soft robotic systems, we introduce SoMoGym (Soft Motion Gym), a software toolkit that facilitates training and evaluating controllers for continuum robots. SoMoGym provides a set of benchmark tasks in which soft robots interact with various objects and environments. It allows evaluation of performance on these tasks for controllers of interest and enables the use of RL to generate new controllers. Custom environments and robots can likewise be added easily. We provide and evaluate baseline RL policies for each of the benchmark tasks. These results show that SoMoGym enables the use of RL for continuum robots, a class of robots not covered by existing benchmarks, giving them the capability to autonomously solve tasks that were previously unattainable.
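To make the benchmark-and-rollout idea concrete, the sketch below shows the standard Gym-style interaction loop that toolkits of this kind build on. The environment here is a stand-in, not the real SoMoGym API: it models a hypothetical 1-D "move the tip to a target" task, with a trivial proportional controller playing the role of a learned policy.

```python
# Minimal sketch of a Gym-style environment and rollout loop.
# NOTE: ToyContinuumEnv is an illustrative stand-in, not SoMoGym's API.
import random


class ToyContinuumEnv:
    """Toy 1-D task: drive a tip position toward a target."""

    def __init__(self, target=1.0, horizon=50):
        self.target = target
        self.horizon = horizon

    def reset(self):
        self.pos = 0.0
        self.t = 0
        return self.pos  # observation: current tip position

    def step(self, action):
        # Clip the actuation command, as a real bounded actuator would.
        action = max(-0.1, min(0.1, action))
        self.pos += action
        self.t += 1
        reward = -abs(self.target - self.pos)  # dense distance penalty
        done = self.t >= self.horizon
        return self.pos, reward, done, {}


def rollout(env, policy, seed=0):
    """Run one episode under `policy` and return the total reward."""
    random.seed(seed)
    obs = env.reset()
    total, done = 0.0, False
    while not done:
        obs, reward, done, _ = env.step(policy(obs))
        total += reward
    return total


# A trivial bang-bang "controller" standing in for a trained RL policy.
greedy = lambda obs: 0.1 if obs < 1.0 else -0.1
```

In SoMoGym proper, the same reset/step loop would instead drive a physics simulation of a continuum robot, and `policy` would be a controller under evaluation or a policy being trained by an RL algorithm.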
