Abstract

The semantic segmentation of the vast amounts of acquired 3D data has become an important task in recent years. We propose a novel association mechanism that enables information transfer between two 3D representations: point clouds and meshes. The association mechanism can be used in a two-fold manner: (i) feature transfer to stabilize the semantic segmentation of one representation with features from the other representation, and (ii) label transfer to achieve the semantic annotation of both representations. We claim that point clouds are an intermediate product, whereas meshes are a final user product that jointly provides geometric and textural information. For this reason, we opt for semantic mesh segmentation in the first place. We apply an off-the-shelf PointNet++ to a textured urban triangle mesh generated from LiDAR and oblique imagery. For each face of the mesh, a feature vector is computed and optionally extended by inherent LiDAR features as provided by the sensor (e.g. intensity). The feature vector extension is accomplished with the proposed association mechanism. By these means, we leverage inherent features from both data representations for the semantic mesh segmentation (multi-modality). We achieve an overall accuracy of 86.40% at face level on a dedicated test mesh. Neglecting LiDAR-inherent features in the per-face feature vectors decreases the mean intersection over union by ∼2%. Leveraging our association mechanism, we transfer the predicted mesh labels to the LiDAR point cloud at a stroke. In this way, we semantically segment the point cloud by implicit usage of geometric and textural mesh features. The semantic point cloud segmentation achieves an overall accuracy close to 84% at point level for both feature vector compositions.
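As a rough illustration of this two-fold usage, the sketch below assumes an already computed association (here a simple array mapping each LiDAR point to a mesh face index, with -1 for unassociated points) and shows (i) the aggregation of a LiDAR-inherent feature per face and (ii) the transfer of predicted face labels back to the points. The identifiers and the median aggregation are assumptions made for this sketch, not taken from the paper.

```python
import numpy as np

def transfer_features_and_labels(point_to_face, intensity, face_labels):
    """Minimal sketch of the two-fold use of a point/face association.

    point_to_face : (N,) face index per LiDAR point, -1 if unassociated
    intensity     : (N,) LiDAR intensity per point
    face_labels   : (F,) predicted semantic label per mesh face
    (All names are illustrative placeholders, not identifiers from the paper.)
    """
    n_faces = face_labels.shape[0]

    # (i) Feature transfer: aggregate a LiDAR-inherent feature per face so it
    #     can extend the per-face feature vector for mesh segmentation
    #     (median aggregation is an assumption for this sketch).
    face_intensity = np.full(n_faces, np.nan)
    for f in range(n_faces):
        pts = np.flatnonzero(point_to_face == f)
        if pts.size:
            face_intensity[f] = np.median(intensity[pts])

    # (ii) Label transfer: propagate predicted face labels to the associated
    #      points, yielding a labeled point cloud without a second classifier.
    point_labels = np.full(point_to_face.shape, -1, dtype=face_labels.dtype)
    valid = point_to_face >= 0
    point_labels[valid] = face_labels[point_to_face[valid]]
    return face_intensity, point_labels
```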

Highlights

  • The past decade has shown that 3D data acquisition and data processing have become increasingly feasible and important in the domain of photogrammetry and remote sensing

  • We focus our work on meshes and investigate the semantic segmentation of textured meshes in urban areas as generated from LiDAR data and oblique imagery

  • In subsections 3.3 and 3.4, we describe in detail the association of LiDAR points and mesh faces, along with the particular challenges arising from the aforementioned discrepancies between the mesh and the LiDAR point cloud
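
The association referred to in the last highlight can be pictured, in strongly simplified form, as a nearest-face lookup with a distance threshold that rejects associations where point cloud and mesh diverge (e.g. due to filtering or georeferencing discrepancies). The paper's actual mechanism addresses these challenges in more detail; the centroid-based lookup and the threshold value below are assumptions for illustration only.

```python
import numpy as np
from scipy.spatial import cKDTree

def associate_points_to_faces(points, vertices, faces, max_dist=0.5):
    """Nearest-centroid association of LiDAR points to mesh faces (sketch).

    points   : (N, 3) LiDAR points
    vertices : (V, 3) mesh vertex coordinates
    faces    : (F, 3) vertex indices per triangle
    Returns a (N,) array of face indices; -1 marks points for which no
    sufficiently close face exists (the 0.5 m threshold is an assumption).
    """
    centroids = vertices[faces].mean(axis=1)            # (F, 3) face centroids
    dist, idx = cKDTree(centroids).query(points, k=1)   # nearest face per point
    idx = idx.astype(np.int64)
    idx[dist > max_dist] = -1                           # reject doubtful matches
    return idx
```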



Introduction

The past decade has shown that 3D data acquisition and data processing have become increasingly feasible and important in the domain of photogrammetry and remote sensing. Common representations for 3D data are point clouds, volumetric representations, projected views (i.e. RGB-D images or renderings), and meshes. Textured meshes as generated from LiDAR point clouds and imagery have several favorable characteristics. Meshes facilitate data fusion by utilizing LiDAR points and Multi-View Stereo (MVS) points for the geometric reconstruction while leveraging high-resolution imagery for texturing (hybrid data storage). During mesh generation, point clouds are filtered in such a way that only geometrically relevant points are kept. This includes noise filtering and the removal of points that can be approximated by a face (e.g. points on planar surfaces). Georeferencing issues between LiDAR data and imagery cause additional discrepancies between point clouds and meshes. Moreover, meshes are surface descriptions that cannot handle multi-target capability like point clouds do.
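
As an illustration of the kind of geometric filtering described above, one could fit a plane to a local patch of points and drop all points that the plane already approximates within a tolerance; only the remaining, geometrically relevant points would then contribute to the mesh. The function below is a simplified, assumption-based sketch of this idea, not the mesh-generation pipeline actually used.

```python
import numpy as np

def drop_planar_points(patch, tol=0.02):
    """Illustrative filter for one local patch of points.

    patch : (N, 3) points; tol is an assumed tolerance in metres.
    Points within `tol` of the best-fit plane are well approximated by a
    single face and are dropped; the rest are kept as geometrically relevant.
    """
    centroid = patch.mean(axis=0)
    # The plane normal is the right singular vector belonging to the
    # smallest singular value of the centred patch.
    _, _, vt = np.linalg.svd(patch - centroid, full_matrices=False)
    normal = vt[-1]
    dist = np.abs((patch - centroid) @ normal)
    return patch[dist > tol]
```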
