Abstract

The point cloud, an efficient 3D object representation, plays an indispensable role in autonomous driving technologies such as obstacle avoidance, localization, and map building. Analyzing point clouds (e.g., 3D segmentation) is essential to exploit their informative value in such applications. The main challenge remains the effective and complete extraction of high-level point cloud feature representations. To this end, we present a novel multi-task Y-shaped graph neural network, referred to as MTYGNN, for analyzing 3D point clouds. Extending the conventional U-Net, MTYGNN contains two main branches that simultaneously perform classification and segmentation on point clouds. The classification prediction is fused with the semantic features as scene context to improve segmentation accuracy. Furthermore, we use the homoscedastic uncertainty of each task to weight the individual loss functions, ensuring that the tasks do not negatively interfere with one another. The proposed MTYGNN is evaluated on popular point cloud datasets of traffic scenarios. Experimental results demonstrate that our framework outperforms state-of-the-art baseline methods.
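The uncertainty-based loss weighting mentioned above is commonly implemented following the homoscedastic-uncertainty formulation of Kendall et al. (2018), in which a log-variance parameter per task is learned jointly with the network. Below is a minimal PyTorch sketch of one common simplified form of such a weighted multi-task loss; the class name `UncertaintyWeightedLoss`, the per-task parameterization, and the placeholder losses are our illustrative assumptions, not the paper's actual code.

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Homoscedastic-uncertainty multi-task loss weighting (Kendall et al., 2018).

    Each task loss L_i is scaled by 1 / (2 * sigma_i^2), plus a log(sigma_i)
    regularizer; log(sigma_i^2) is learned jointly with the network weights.
    """

    def __init__(self, num_tasks: int = 2):
        super().__init__()
        # One learnable log(sigma^2) per task; 0 => sigma = 1 (equal weights).
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, losses):
        total = 0.0
        for i, loss in enumerate(losses):
            precision = torch.exp(-self.log_vars[i])  # 1 / sigma_i^2
            # 0.5 * log_vars[i] equals log(sigma_i), the regularization term.
            total = total + 0.5 * precision * loss + 0.5 * self.log_vars[i]
        return total

# Usage: combine the losses of the classification and segmentation branches.
criterion = UncertaintyWeightedLoss(num_tasks=2)
cls_loss = torch.tensor(0.7)   # placeholder classification loss
seg_loss = torch.tensor(1.3)   # placeholder segmentation loss
total_loss = criterion([cls_loss, seg_loss])
```

Because the log-variances are trainable, tasks whose losses are noisier are automatically down-weighted during optimization, which is what prevents one task from dominating the other.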
