Accurate static structure reconstruction and segmentation of non-stationary objects are of vital importance for autonomous navigation applications. These applications assume that a LiDAR scan consists of only static structures. In the real world, however, LiDAR scans contain non-stationary dynamic structures, namely moving and movable objects. Current solutions use segmentation information to isolate and remove moving structures from the LiDAR scan. This strategy fails in several important use cases where segmentation information is not available. In such scenarios, moving objects and objects with high uncertainty in their motion, i.e. movable objects, may escape detection, violating the above assumption. We present MOVES, a novel GAN-based adversarial model that segments out moving as well as movable objects in the absence of segmentation information. We achieve this by accurately transforming a dynamic LiDAR scan into its corresponding static scan, replacing dynamic objects and the occlusions they cause with the static structures they occlude. To do so, we leverage corresponding static-dynamic LiDAR pairs and design a novel discriminator coupled with a contrastive loss on a carefully selected LiDAR scan triplet. For datasets lacking paired information, we propose MOVES-MMD, which integrates Unsupervised Domain Adaptation into the network. We perform rigorous experiments demonstrating state-of-the-art dynamic-to-static translation performance on a sparse real-world industrial dataset, an urban dataset, and a simulated dataset. MOVES also segments out movable and moving objects without using segmentation information. Without utilizing segmentation labels, MOVES performs better than a segmentation-based navigation baseline on highly dynamic and long LiDAR sequences. The code is available here.
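As a rough illustration of the contrastive objective mentioned above, the sketch below shows a generic margin-based loss over an (anchor, positive, negative) triplet of LiDAR scan embeddings, written in PyTorch. The function name, margin value, and choice of Euclidean feature distance are assumptions made for illustration; the abstract does not specify how MOVES selects its triplet or computes the loss.

```python
import torch
import torch.nn.functional as F

def triplet_contrastive_loss(anchor_feat: torch.Tensor,
                             positive_feat: torch.Tensor,
                             negative_feat: torch.Tensor,
                             margin: float = 1.0) -> torch.Tensor:
    """Generic margin-based contrastive loss over a scan-embedding triplet.

    Pulls the anchor embedding toward the positive embedding (e.g. the
    corresponding static scan) and pushes it away from the negative
    embedding (e.g. a dynamic scan). This is only a sketch; the exact
    triplet construction and distance used by MOVES may differ.
    """
    d_pos = F.pairwise_distance(anchor_feat, positive_feat)  # anchor-positive distance
    d_neg = F.pairwise_distance(anchor_feat, negative_feat)  # anchor-negative distance
    # Penalize triplets where the positive is not closer than the negative by `margin`.
    return torch.clamp(d_pos - d_neg + margin, min=0.0).mean()
```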