Abstract

Real-world visual navigation requires robots to operate in unfamiliar, human-occupied dynamic environments. Navigation around humans is especially difficult because it requires anticipating their future motion. We propose an approach that combines learning-based perception with model-based optimal control to navigate among humans based only on monocular, first-person RGB images. Our approach is enabled by our novel data-generation tool, HumANav, which renders photorealistic indoor scenes containing humans; these renderings are used to train the perception module entirely in simulation. Through simulations and experiments on a mobile robot, we demonstrate that the learned navigation policies can anticipate and react to humans without explicitly predicting future human motion, generalize to previously unseen environments and human behaviors, and transfer directly from simulation to reality. Videos describing our approach and experiments, as well as a demo of HumANav, are available on the project website.
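To make the described architecture concrete, below is a minimal sketch of such a modular pipeline in PyTorch. It is not the authors' implementation: the network shape, the waypoint parameterization (x, y, heading), and the planner/controller callables are illustrative assumptions.

```python
# Minimal sketch (assumed interfaces, not the paper's code): a learned perception
# module maps a monocular RGB image and the relative goal to an intermediate
# waypoint, and a model-based planner/controller tracks that waypoint using the
# robot's known dynamics. PerceptionCNN, planner, and controller are hypothetical.
import torch
import torch.nn as nn

class PerceptionCNN(nn.Module):
    """Hypothetical CNN predicting a waypoint (x, y, heading) from an RGB image and goal."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(nn.Linear(64 + 3, 128), nn.ReLU(), nn.Linear(128, 3))

    def forward(self, rgb, goal):
        # rgb: (B, 3, H, W) image batch; goal: (B, 3) goal in the robot frame
        feat = self.backbone(rgb)
        return self.head(torch.cat([feat, goal], dim=1))

def control_step(rgb, goal, perception, planner, controller, state):
    """One replanning step: perceive a waypoint, plan a feasible trajectory, track it."""
    with torch.no_grad():
        waypoint = perception(rgb, goal).squeeze(0).numpy()
    trajectory = planner(state, waypoint)    # e.g., a spline to the predicted waypoint
    return controller(state, trajectory)     # e.g., feedback controls tracking that spline
```

The design choice the sketch reflects is that learning is confined to perception, while planning and control remain model-based and can exploit the robot's known dynamics.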

Highlights

  • Autonomous robot navigation has the potential to enable many critical robot applications, from service robots that deliver food and medicine, to logistics and search and rescue missions

  • Our approach is enabled by a novel data-generation tool which we have designed, the Human Active Navigation Data-Generation Tool (HumANav), a photo-realistic rendering engine for images of humans moving in indoor environments

  • We propose LB-WayPtNav-DH, a framework that combines a learning-based perception module and a model-based planning module for autonomous navigation in a priori unknown indoor environments with humans

Summary

INTRODUCTION

Autonomous robot navigation has the potential to enable many critical robot applications, from service robots that deliver food and medicine, to logistics and search and rescue missions. Our approach is enabled by a novel data-generation tool which we have designed, the Human Active Navigation Data-Generation Tool (HumANav), a photo-realistic rendering engine for images of humans moving in indoor environments. Equipped with this data-generation tool, we train a modular architecture that combines a learning-based perception module with a dynamics model-based planning and control module to learn navigation policies entirely in simulation. The key contributions of this paper are: a) HumANav, an active data-generation tool to benchmark visual navigation algorithms around humans; b) a fully automated, self-supervised training scheme via MPC that leverages HumANav to generate data for learning navigation policies without requiring any expensive demonstrations by an expert; and c) an autonomous visual navigation method that uses this training data to learn to navigate around humans in unknown indoor environments based only on monocular RGB images, does not require explicit state estimation or trajectory prediction of the human, and performs zero-shot transfer of learned policies from simulation to reality.
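As a rough illustration of contribution b), the sketch below shows how a fully automated, self-supervised dataset could be assembled: because the simulator knows the full scene (map and human trajectory), an MPC-style expert can compute a waypoint label for each rendered first-person image. The interfaces shown (sample_free_pose, render_rgb, expert_waypoint) are hypothetical placeholders, not HumANav's actual API.

```python
# Hedged sketch of self-supervised data generation (assumed interfaces, not the
# authors' code). The renderer and MPC expert are passed in as callables since
# their concrete APIs are assumptions here.
import numpy as np

def generate_training_pair(scene, render_rgb, expert_waypoint, rng):
    """Return one (rgb_image, relative_goal, waypoint_label) training example."""
    start = scene.sample_free_pose(rng)             # assumed helper: collision-free (x, y, theta)
    goal = scene.sample_free_pose(rng)
    rgb = render_rgb(scene, camera_pose=start)      # photorealistic first-person image
    waypoint = expert_waypoint(scene, start, goal)  # MPC expert with privileged full state
    rel_goal = goal - start                         # simplified; a full version rotates into the robot frame
    return rgb, rel_goal, waypoint

def build_dataset(scenes, render_rgb, expert_waypoint, n_per_scene, seed=0):
    """Fully automated: repeated expert rollouts, no human demonstrations."""
    rng = np.random.default_rng(seed)
    return [generate_training_pair(s, render_rgb, expert_waypoint, rng)
            for s in scenes for _ in range(n_per_scene)]
```

Because the expert relies on privileged state available only in simulation, no expert demonstrations are needed; the learned perception module must then reproduce the expert's waypoint decisions from the RGB image alone.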

RELATED WORK
PROBLEM SETUP
MODEL-BASED LEARNING FOR NAVIGATION AROUND HUMANS
Data Generation Procedure
SIMULATION EXPERIMENTS
Results
HARDWARE EXPERIMENTS
Findings
CONCLUSION AND FUTURE WORK