Abstract

Semantic segmentation of 3D scenes is one of the most important tasks in computer vision and has attracted much attention. In this paper, we propose a novel framework for 3D semantic segmentation of aerial photogrammetry models, which uses orthographic projection to improve efficiency while still ensuring high precision, and which can be applied to multiple types of models (i.e., textured meshes or colored point clouds). In our pipeline, we first obtain RGB images and elevation images from the 3D scene through orthographic projection, then use an image semantic segmentation network to segment these images and obtain pixel-wise semantic predictions, and finally back-project the segmentation results onto the 3D model for fusion. Specifically, for the image semantic segmentation model, we design a cross-modality feature aggregation module and a context guidance module based on category features, which help the network learn more discriminative features between different objects. For the 2D-3D semantic fusion, we combine the segmentation results of the 2D images with the geometric consistency of the 3D models for joint optimization, which further improves the accuracy of the 3D semantic segmentation. Extensive experiments on two large-scale urban scenes demonstrate the efficiency and feasibility of our algorithm and show that it surpasses current mainstream 3D deep learning methods.
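To make the projection-and-fusion idea concrete, the following is a minimal sketch (not the authors' implementation) of how a colored point cloud could be rasterized into top-down RGB and elevation images and how per-pixel labels could be back-projected onto the points. The grid resolution, the "highest point wins" occlusion rule, and all function names are illustrative assumptions; the paper's actual projection, network, and geometric-consistency optimization are more involved.

```python
import numpy as np

def orthographic_project(points, colors, resolution=0.5):
    """Rasterize a colored point cloud into top-down RGB and elevation images.

    points:     (N, 3) XYZ coordinates (e.g., meters).
    colors:     (N, 3) RGB values in [0, 255].
    resolution: assumed ground sampling distance per pixel (meters).
    Returns the RGB image, the elevation image, and the per-point pixel
    indices, which are reused later for back-projection.
    """
    xy_min = points[:, :2].min(axis=0)
    # Map XY coordinates onto a regular pixel grid.
    cols = ((points[:, 0] - xy_min[0]) / resolution).astype(int)
    rows = ((points[:, 1] - xy_min[1]) / resolution).astype(int)
    h, w = rows.max() + 1, cols.max() + 1

    rgb = np.zeros((h, w, 3), dtype=np.uint8)
    elev = np.full((h, w), -np.inf, dtype=np.float32)

    # Write points in ascending height order so the highest (visible) point
    # per pixel is kept, a simple stand-in for ortho-view occlusion handling.
    order = np.argsort(points[:, 2])
    rgb[rows[order], cols[order]] = colors[order]
    elev[rows[order], cols[order]] = points[order, 2]
    elev[np.isinf(elev)] = 0.0  # fill pixels that received no point
    return rgb, elev, rows, cols

def back_project(label_image, rows, cols):
    """Assign each 3D point the semantic label of the pixel it projects to."""
    return label_image[rows, cols]

if __name__ == "__main__":
    # Toy data; a real model would come from aerial photogrammetry.
    pts = np.random.rand(1000, 3) * [50.0, 50.0, 10.0]
    cols_rgb = np.random.randint(0, 256, (1000, 3), dtype=np.uint8)
    rgb, elev, r, c = orthographic_project(pts, cols_rgb)
    labels_2d = np.zeros(rgb.shape[:2], dtype=np.int64)  # stand-in for 2D network output
    point_labels = back_project(labels_2d, r, c)
    print(rgb.shape, elev.shape, point_labels.shape)
```

In the full pipeline described above, `labels_2d` would come from the 2D segmentation network fed with both the RGB and elevation images, and the per-point labels would then be refined jointly with the 3D model's geometric consistency rather than taken directly from a single projection.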
