Abstract

Moving object segmentation provides crucial information for various downstream tasks in robotics and autonomous driving. Effectively extracting spatio-temporal information from consecutive frames and addressing the scarcity of annotated data are paramount for learning-based 3D LiDAR moving object segmentation (LiDAR-MOS). In this work, we propose a novel deep neural network based on Vision Transformers (ViTs) to tackle this problem. We first validate the feasibility of Transformer networks for this task, offering an alternative to CNNs. Specifically, we utilize a dual-branch structure based on range-image data to extract spatio-temporal information from consecutive frames and fuse it using a motion-guided attention mechanism. Furthermore, we employ the ViT as the backbone, keeping its architecture unchanged from the one used for RGB images. This enables us to leverage models pre-trained on RGB images to improve results and to address the limited availability of annotated LiDAR point cloud data, since reusing RGB pre-training is far cheaper than acquiring and annotating additional point clouds. We validate the effectiveness of our approach on the LiDAR-MOS benchmark of SemanticKITTI and achieve results comparable to CNN-based methods operating on range-image data. The source code and trained models are available at https://github.com/mafangniu/MOSViT.git.
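To make the dual-branch design concrete, the sketch below shows one common way such a network can be wired up: an appearance branch over the range image, a motion branch over residual range images from consecutive frames, and a motion-guided attention module that re-weights the appearance features before a per-pixel classification head. This is a minimal illustration under assumed shapes and module names (DualBranchMOS, MotionGuidedAttention, the 5-channel range input, and the simple convolutional encoders are all hypothetical), not the paper's released implementation, which uses a ViT backbone in place of the toy encoders here.

```python
import torch
import torch.nn as nn


class MotionGuidedAttention(nn.Module):
    """Re-weight appearance features with an attention map derived from motion features."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_app: torch.Tensor, feat_mot: torch.Tensor) -> torch.Tensor:
        attn = self.gate(feat_mot)           # motion-derived attention map in [0, 1]
        return feat_app * attn + feat_app    # residual re-weighting of appearance features


class DualBranchMOS(nn.Module):
    """Toy dual-branch network: two small encoders, motion-guided fusion, per-pixel head."""
    def __init__(self, in_ch_app: int = 5, in_ch_mot: int = 1,
                 channels: int = 64, num_classes: int = 2):
        super().__init__()

        def encoder(in_ch: int) -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(in_ch, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            )

        self.app_branch = encoder(in_ch_app)   # range image (e.g. range, x, y, z, intensity)
        self.mot_branch = encoder(in_ch_mot)   # residual range images from consecutive frames
        self.fusion = MotionGuidedAttention(channels)
        self.head = nn.Conv2d(channels, num_classes, kernel_size=1)

    def forward(self, range_img: torch.Tensor, residual_img: torch.Tensor) -> torch.Tensor:
        fused = self.fusion(self.app_branch(range_img), self.mot_branch(residual_img))
        return self.head(fused)                # per-pixel moving / static logits


if __name__ == "__main__":
    model = DualBranchMOS()
    logits = model(torch.randn(1, 5, 64, 2048),   # projected range image
                   torch.randn(1, 1, 64, 2048))   # residual image from a previous frame
    print(logits.shape)  # torch.Size([1, 2, 64, 2048])
```

In this formulation, the motion branch only gates the appearance features rather than replacing them, so static structure is preserved while regions with large frame-to-frame residuals are emphasized; swapping the toy encoders for a pre-trained ViT on the appearance branch is what allows RGB pre-training to be reused, as described in the abstract.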
