Abstract

Semantic segmentation of urban meshes is essential for understanding real-world 3D environments and plays a vital role in application domains such as digital twins, 3D navigation, and smart cities. However, the inherent topological complexity of urban meshes impedes the precise representation of dependencies and local structures, compromising segmentation accuracy, especially for small or irregularly shaped objects such as vegetation and vehicles. To address this challenge, we introduce UrbanSegNet, a novel end-to-end model that incorporates diffusion perceptron blocks and a vertex spatial attention mechanism. The diffusion perceptron blocks dynamically enlarge the receptive field to capture features ranging from local to fully global, enabling an effective multi-scale representation of urban meshes and improving segmentation accuracy for small and irregularly shaped objects. The vertex spatial attention mechanism extracts internal correlations within urban meshes to enhance semantic segmentation performance. In addition, a tailored loss function is designed to further improve overall performance. Comprehensive experiments on two datasets demonstrate that the proposed method outperforms state-of-the-art models in mean F1 score, recall, and mean intersection over union (mIoU). The results also show that UrbanSegNet achieves higher segmentation accuracy on vehicles and high vegetation than state-of-the-art methods, highlighting the strength of the proposed model in extracting features of small and irregularly shaped objects.
