Abstract
3D deep learning methods require large amounts of supervised 3D point-cloud data to learn statistical models for various ITS-related tasks, e.g., object classification, object detection, and object segmentation. However, manually annotating 3D point-cloud data is time-consuming and labor-intensive. This paper therefore aims to co-localize 3D objects in mobile LiDAR point clouds without any supervised training data. To this end, we propose a new framework for 3D object co-localization that automatically extracts objects of the same category from different point-cloud scenes. Specifically, to search for and exploit the shared information among objects in different point-cloud scenes, we formulate 3D object co-localization as a maximal subgraph matching problem. During graph construction, to handle the inconsistent representation of objects across scenes, we propose a multi-scale clustering method that represents each object by a pyramid structure. In addition, because the maximal subgraph matching problem is NP-hard, we propose a stochastic search algorithm to generate the co-localization results. Extensive experiments on point-cloud data collected by a Riegl VMX-450 mobile LiDAR system demonstrate the promising performance of the proposed framework.
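The stochastic search over a maximal subgraph matching mentioned above can be illustrated, in highly simplified form, by a randomized greedy search for a large common subgraph between two object graphs. The toy graphs, scalar node descriptors, compatibility threshold, and function names below are illustrative assumptions for the sketch, not the paper's actual formulation.

```python
import random

# Toy object graphs for two scenes: node -> scalar descriptor, plus edge sets.
# All data and thresholds here are hypothetical, for illustration only.
feat_a = {0: 1.0, 1: 2.0, 2: 5.0}
edges_a = {(0, 1), (1, 2)}
feat_b = {0: 1.1, 1: 1.9, 2: 9.0}
edges_b = {(0, 1)}

def compatible(u, v, tol=0.3):
    # Two nodes may be matched only if their descriptors are close.
    return abs(feat_a[u] - feat_b[v]) <= tol

def adjacent(u, u2, edges):
    return (u, u2) in edges or (u2, u) in edges

def consistent(u, v, match):
    # Adding (u, v) must preserve adjacency agreement with all matched pairs.
    return all(adjacent(u, u2, edges_a) == adjacent(v, v2, edges_b)
               for u2, v2 in match.items())

def stochastic_search(trials=200, seed=0):
    # Randomized restarts: each trial grows a matching greedily from a
    # randomly shuffled list of compatible node pairs; keep the largest.
    rng = random.Random(seed)
    best = {}
    for _ in range(trials):
        match = {}
        pairs = [(u, v) for u in feat_a for v in feat_b if compatible(u, v)]
        rng.shuffle(pairs)
        for u, v in pairs:
            if u not in match and v not in match.values() and consistent(u, v, match):
                match[u] = v
        if len(match) > len(best):
            best = dict(match)
    return best
```

On this toy data the search recovers the two-node common subgraph {0: 0, 1: 1}; in practice the nodes would carry multi-scale cluster descriptors rather than scalars, and the search would run over far larger graphs.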
More From: IEEE Transactions on Intelligent Transportation Systems