Abstract
Textured 3D meshes are among the final user products of photogrammetry and remote sensing. However, research on the semantic segmentation of complex urban scenes represented as textured 3D meshes is still in its infancy. We present a mesh-based dynamic graph convolutional neural network (DGCNN) for the semantic segmentation of textured 3D meshes. To represent each mesh facet, a composite input feature vector is constructed by concatenating the face-inherent features, i.e., the XYZ coordinates of the center of gravity (CoG), the texture values, and the normal vector. A texture fusion module is embedded in the proposed mesh-based DGCNN to generate high-level semantic features from the high-resolution texture information, which benefits semantic segmentation. The proposed method achieves competitive accuracy on the SUM mesh dataset: the overall accuracy (OA), Kappa coefficient (Kap), mean precision (mP), mean recall (mR), mean F1 score (mF1), and mean intersection over union (mIoU) are 93.3%, 88.7%, 79.6%, 83.0%, 80.7%, and 69.6%, respectively. In particular, the OA, mean class accuracy (mAcc), mIoU, and mF1 increase by 0.3%, 12.4%, 3.4%, and 6.9%, respectively, compared with the state-of-the-art method.
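To make the per-facet input representation concrete, the following is a minimal sketch of how such a composite feature vector could be assembled, assuming triangular faces and per-face RGB texture values; the function name, argument layout, and the 9-D ordering (CoG, texture, normal) are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def composite_face_features(vertices, faces, face_colors):
    """Build a 9-D composite feature vector per mesh facet by
    concatenating the face-inherent features named in the abstract:
    CoG XYZ, texture (RGB) values, and the face normal vector.

    vertices:    (V, 3) float array of mesh vertex coordinates.
    faces:       (F, 3) int array of triangle vertex indices.
    face_colors: (F, 3) float array of per-face texture values in [0, 1].
    Returns:     (F, 9) float array, one feature vector per facet.
    """
    tri = vertices[faces]                          # (F, 3, 3) triangle corners
    cog = tri.mean(axis=1)                         # (F, 3) center of gravity
    # Face normal = normalized cross product of two triangle edges.
    normals = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    normals /= np.linalg.norm(normals, axis=1, keepdims=True) + 1e-12
    return np.concatenate([cog, face_colors, normals], axis=1)

# Usage on a toy mesh with a single triangle:
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
tris = np.array([[0, 1, 2]])
colors = np.array([[0.5, 0.4, 0.3]])
print(composite_face_features(verts, tris, colors))  # shape (1, 9)
```

Per-facet vectors of this form would then be consumed by the DGCNN's edge-convolution layers, with the texture channels additionally routed through the texture fusion module.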