Abstract
In the pursuit of truly autonomous robotics, applying deep reinforcement learning (DRL) techniques to solve complex robotics tasks has attracted growing interest in both academia and industry. Numerous simulation frameworks currently exist for evaluating DRL algorithms with robots; they usually come with prebuilt tasks or provide tools to create custom environments. Among these, one widely adopted approach is to use Robot Operating System (ROS) based DRL frameworks for simulation and for deployment in the real world. Existing ROS-based DRL simulation frameworks such as openai_ros and gym-gazebo provide the means to create environments; however, they support neither training with vectorised environments to speed up the training process nor parallel simulations for testing and evaluating meta-learning, multi-task learning and transfer learning approaches. To overcome these limitations, we present MultiROS, a 3D robotic simulation framework with a collection of prebuilt environments for DRL research. The package interfaces with the Gazebo robotic simulator through ROS and provides a modular structure for creating ROS-based RL environments. Unlike existing frameworks, MultiROS supports training with multiple environments in parallel and simultaneously accessing data from each simulation. Furthermore, since MultiROS uses the popular OpenAI Gym interface, it is compatible with most OpenAI Gym based reinforcement learning algorithms implemented in third-party Python frameworks.
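As a minimal sketch of the Gym-style interface the abstract refers to, the snippet below defines a toy environment with the standard `reset`/`step` API and a naive vectorised wrapper that steps several copies in lock-step. Both classes (`ToyEnv`, `VectorEnv`) are illustrative stand-ins, not MultiROS code; a real MultiROS environment would drive a Gazebo simulation behind the same interface.

```python
class ToyEnv:
    """Hypothetical stand-in for a Gym-style environment (reset/step API)."""

    def __init__(self):
        self.state = 0

    def reset(self):
        # Return the initial observation.
        self.state = 0
        return self.state

    def step(self, action):
        # Apply the action and return (observation, reward, done, info),
        # as in the classic OpenAI Gym step signature.
        self.state += action
        reward = -abs(self.state)
        done = self.state >= 3
        return self.state, reward, done, {}


class VectorEnv:
    """Naive vectorised wrapper: steps several environments in lock-step,
    mirroring how parallel simulations could be driven. Illustrative only."""

    def __init__(self, env_fns):
        self.envs = [fn() for fn in env_fns]

    def reset(self):
        return [env.reset() for env in self.envs]

    def step(self, actions):
        results = [env.step(a) for env, a in zip(self.envs, actions)]
        obs, rewards, dones, infos = zip(*results)
        return list(obs), list(rewards), list(dones), list(infos)


# Drive four environment copies with one batched call.
vec = VectorEnv([ToyEnv for _ in range(4)])
first_obs = vec.reset()
obs, rewards, dones, infos = vec.step([1, 1, 1, 1])
```

A batched `step` like this is what lets vectorised RL algorithms collect experience from many simulations per update, which is the capability the abstract highlights as missing from earlier ROS-based frameworks.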