Abstract

LiDAR-based 3D object detection is an important task for autonomous driving, and current approaches suffer from the sparse and partial point clouds caused by distant and occluded objects. In this paper, we propose a novel two-stage framework, namely PC-RGNN, that addresses these challenges with two specific solutions. On the one hand, we introduce a point cloud completion module that recovers high-quality proposals with dense points and complete views while preserving the original structures. On the other hand, we design a graph neural network module that comprehensively captures relations among points through a local-global attention mechanism and multi-scale graph-based context aggregation, substantially strengthening the encoded features. Extensive experiments on the KITTI benchmark show that the proposed approach outperforms previous state-of-the-art baselines by remarkable margins, highlighting its effectiveness.
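To illustrate the general idea of multi-scale graph-based context aggregation with local-global attention over point features, the sketch below is a minimal, hypothetical PyTorch module (not the authors' exact architecture): it builds k-NN graphs at several neighborhood sizes, attends over each point's neighbors at every scale, and fuses the result with an attention-pooled global descriptor. All class, parameter, and tensor names are assumptions for illustration only.

```python
# Hypothetical sketch, assuming PyTorch: multi-scale k-NN graph aggregation
# with a simple local-global attention over per-point features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleGraphAggregation(nn.Module):
    """Aggregate point features over k-NN graphs at several scales with
    learned neighbor attention, then fuse with a global context vector."""

    def __init__(self, feat_dim: int, scales=(4, 8, 16)):
        super().__init__()
        self.scales = scales
        # One attention scorer and one feature transform per scale.
        self.attn = nn.ModuleList([nn.Linear(2 * feat_dim, 1) for _ in scales])
        self.proj = nn.ModuleList([nn.Linear(feat_dim, feat_dim) for _ in scales])
        # Fuse concatenated per-scale local contexts with a global descriptor.
        self.fuse = nn.Linear(feat_dim * (len(scales) + 1), feat_dim)

    def forward(self, xyz: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # xyz: (B, N, 3) point coordinates; feats: (B, N, C) point features.
        B, N, C = feats.shape
        dists = torch.cdist(xyz, xyz)  # (B, N, N) pairwise Euclidean distances
        contexts = []
        for k, attn, proj in zip(self.scales, self.attn, self.proj):
            # k nearest neighbors of each point (including the point itself).
            idx = dists.topk(k, dim=-1, largest=False).indices          # (B, N, k)
            nbr = torch.gather(
                feats.unsqueeze(1).expand(B, N, N, C), 2,
                idx.unsqueeze(-1).expand(B, N, k, C))                   # (B, N, k, C)
            center = feats.unsqueeze(2).expand(B, N, k, C)
            # Local attention: score each (center, neighbor) pair,
            # normalize over the k neighbors, then take a weighted sum.
            w = F.softmax(attn(torch.cat([center, nbr], dim=-1)), dim=2)
            contexts.append(proj((w * nbr).sum(dim=2)))                 # (B, N, C)
        # Global context: attention-pool all points (scores from feature means)
        # and broadcast the pooled descriptor back to every point.
        g = F.softmax(feats.mean(dim=-1, keepdim=True), dim=1)          # (B, N, 1)
        global_ctx = (g * feats).sum(dim=1, keepdim=True).expand(B, N, C)
        return self.fuse(torch.cat(contexts + [global_ctx], dim=-1))

# Usage example: 2 point clouds of 256 points with 64-D features each.
xyz = torch.rand(2, 256, 3)
feats = torch.rand(2, 256, 64)
out = MultiScaleGraphAggregation(64)(xyz, feats)
print(out.shape)  # torch.Size([2, 256, 64])
```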
