Abstract

Detecting navigable space is a fundamental capability for mobile robots navigating in unknown or unmapped environments. In this work, we treat visual navigable space segmentation as a scene decomposition problem and propose the Polyline Segmentation Variational autoencoder Network (PSV-Net), a representation learning-based framework for learning navigable space segmentation in a self-supervised manner. Current segmentation techniques rely heavily on fully-supervised learning strategies, which demand a large amount of pixel-level annotated images. In contrast, the proposed framework leverages a Variational AutoEncoder (VAE) and an AutoEncoder (AE) to learn a polyline representation that compactly outlines the desired navigable space boundary. Through extensive experiments, we validate that PSV-Net can learn the visual navigable space with no or few labels, producing an accuracy comparable to fully-supervised state-of-the-art methods that use all available labels. In addition, we show that integrating the proposed navigable space segmentation model with a visual planner achieves efficient mapless navigation in real environments.
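
To make the polyline representation concrete, below is a minimal, hedged sketch (not the authors' implementation) of a VAE that maps an image to a compact polyline: K boundary vertices, one normalized y-coordinate per image-column bin, with pixels below the polyline treated as navigable. All class names, dimensions, and the column-wise parameterization are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PolylineVAE(nn.Module):
    """Encode an image and decode K polyline vertex heights (illustrative only)."""
    def __init__(self, k_vertices=16, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
        )
        self.to_mu = nn.Linear(32 * 16 * 16, latent_dim)
        self.to_logvar = nn.Linear(32 * 16 * 16, latent_dim)
        # Decoder predicts K normalized y-coordinates (one per column bin).
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, k_vertices), nn.Sigmoid(),
        )

    def forward(self, img):
        h = self.encoder(img)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decoder(z), mu, logvar

def polyline_to_mask(ys, height=64, width=64):
    """Rasterize normalized boundary heights into a binary navigable-space mask."""
    cols = torch.linspace(0, ys.shape[-1] - 1, width).long()
    boundary = ys[..., cols] * height                   # (B, W): boundary row per column
    rows = torch.arange(height).view(1, height, 1)
    return (rows >= boundary.unsqueeze(1)).float()      # pixels below the boundary = navigable

model = PolylineVAE()
ys, mu, logvar = model(torch.rand(1, 3, 64, 64))
mask = polyline_to_mask(ys)   # (1, 64, 64) segmentation mask derived from the polyline
```

In a self-supervised setting such as the one described in the abstract, the reconstruction and KL terms of the VAE objective would be combined with a boundary-consistency signal rather than pixel-level labels; the exact losses are beyond this sketch.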
