3D semantic segmentation is one of the most fundamental problems in 3D scene understanding and has attracted much attention in the field of computer vision. In this paper, we propose an active-learning-based 3D semantic labeling method for large-scale 3D mesh models generated from images or videos. Taking as input a 3D mesh model reconstructed by an image-based 3D modeling system, together with the calibrated images, our method outputs a fine 3D semantic mesh model in which each facet is assigned a semantic label. There are three major steps in our framework: 2D semantic segmentation, 2D-3D semantic fusion, and batch image selection. A limited set of annotated images is first used to fine-tune a pre-trained semantic segmentation network, which produces pixel-wise semantic probability maps. All these maps are then back-projected into 3D space and fused on the 3D mesh model using Markov Random Field (MRF) optimization, yielding a preliminary 3D semantic mesh model and a heat model that shows each facet's confidence. This 3D semantic model serves as a reliable supervisor: the parts that are not well segmented are selected for manual annotation, boosting the performance of the 2D semantic segmentation network, as well as the 3D mesh labeling, in the next iteration. This Training-Fusion-Selection process repeats until the label assignment of the 3D mesh model becomes stable. In this way, we significantly reduce the amount of annotation required without degrading the quality of the resulting 3D semantic models. Extensive experiments demonstrate the effectiveness and generalization ability of our method on a wide variety of datasets.
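The fusion and confidence steps described above can be sketched in simplified form. The snippet below is a minimal illustration, not the paper's actual algorithm: it assumes per-facet class probabilities have already been back-projected from each calibrated view, averages them as a stand-in for the unary term (the MRF smoothing across neighboring facets is omitted), and uses one minus the normalized entropy as a hypothetical per-facet confidence for the heat model. All function names and the weighting scheme are illustrative assumptions.

```python
import numpy as np

def fuse_facet_probabilities(view_probs, view_weights=None):
    """Fuse per-view class probabilities for a single facet.

    view_probs: (n_views, n_classes) array of pixel-wise semantic
    probabilities back-projected from each calibrated image onto this
    facet. Optional view_weights could encode viewing angle or distance.
    NOTE: this simple weighted average stands in for the unary term only;
    the paper additionally smooths labels across neighboring facets with
    an MRF, which is not reproduced here.
    """
    p = np.asarray(view_probs, dtype=float)
    if view_weights is not None:
        p = p * np.asarray(view_weights, dtype=float)[:, None]
    fused = p.sum(axis=0)
    return fused / fused.sum()

def facet_confidence(fused):
    """Illustrative confidence value for the heat model:
    1 - normalized entropy of the fused class distribution.
    Low values mark poorly segmented facets, which the batch
    selection step would flag for manual annotation.
    """
    fused = np.asarray(fused, dtype=float)
    eps = 1e-12  # avoid log(0)
    ent = -(fused * np.log(fused + eps)).sum()
    return 1.0 - ent / np.log(len(fused))

# Two views voting on a 2-class facet: fused distribution and confidence.
fused = fuse_facet_probabilities([[0.8, 0.2], [0.6, 0.4]])
conf = facet_confidence(fused)          # confident facet -> closer to 1
ambiguous = facet_confidence([0.5, 0.5])  # maximally uncertain -> near 0
```

In the full pipeline, facets with the lowest confidence would be traced back to the images that observe them, and those images would form the next annotation batch.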