Abstract

In this letter, an object-level matching method is proposed to match buildings across cross-dimensional remote-sensing data. Object-level matching of buildings is an essential prerequisite for 3-D shape reconstruction and for generating downstream products such as digital building models. Optical images and LiDAR point clouds, which offer rich color and precise 3-D positioning information, respectively, are mostly used in isolation for 3-D shape reconstruction. Combining the 2-D image and the 3-D point cloud is clearly preferable. However, cross-dimensional object-level matching (COlM) is difficult, since few descriptors can unify 2-D images and 3-D point clouds. To address this issue, a feature transformation framework is proposed. First, a cross-dimensional encoder module is introduced to unify the descriptors extracted from the optical image and the LiDAR point cloud. Second, a spatial occupancy probability descriptor (SOPD) is employed to associate the descriptors extracted from data of different dimensions with the intrinsic geometric structure of buildings. The 3-D geometric structure is then transformed into a feature vector for matching. For the experiments, a cross-dimensional object-level building dataset was collected and labeled to verify the method; it includes cross-view 2-D optical images and LiDAR point clouds for each of hundreds of buildings. The results show that high-accuracy COlM is achieved.
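
To make the core idea concrete, the sketch below illustrates one plausible reading of an occupancy-based descriptor on the point-cloud side: a building's cloud is normalized, voxelized, and converted into a vector of per-voxel occupancy probabilities, and two such vectors are compared by cosine similarity. This is a minimal illustrative stand-in, not the paper's actual SOPD or encoder; in the proposed framework the image branch would presumably be a learned cross-dimensional encoder predicting a compatible descriptor, which is omitted here. The function names, grid size, and similarity measure are all assumptions.

```python
import numpy as np

def occupancy_descriptor(points: np.ndarray, grid: int = 8) -> np.ndarray:
    """Map a building point cloud of shape (N, 3) to a flattened grid of
    per-voxel occupancy probabilities (illustrative stand-in for an SOPD)."""
    # Normalize the cloud into the unit cube so descriptors are
    # comparable across buildings of different sizes and positions.
    mins, maxs = points.min(axis=0), points.max(axis=0)
    norm = (points - mins) / np.maximum(maxs - mins, 1e-9)
    # Assign each point to a voxel and count points per voxel.
    idx = np.clip((norm * grid).astype(int), 0, grid - 1)
    counts = np.zeros((grid, grid, grid))
    np.add.at(counts, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    # Turn counts into occupancy probabilities and flatten to a vector.
    return (counts / max(counts.sum(), 1e-9)).ravel()

def match_score(desc_a: np.ndarray, desc_b: np.ndarray) -> float:
    """Cosine similarity between two descriptor vectors; higher means
    the two candidates are more likely the same building."""
    denom = np.linalg.norm(desc_a) * np.linalg.norm(desc_b) + 1e-9
    return float(desc_a @ desc_b / denom)

# Usage with synthetic data: a cloud matched against a jittered copy of
# itself should score higher than against an unrelated cloud.
rng = np.random.default_rng(0)
cloud = rng.random((2000, 3)) * [10.0, 6.0, 15.0]
same = occupancy_descriptor(cloud + rng.normal(0, 0.05, cloud.shape))
other = occupancy_descriptor(rng.random((2000, 3)) * [8.0, 8.0, 4.0])
query = occupancy_descriptor(cloud)
print(match_score(query, same), match_score(query, other))
```

Under this reading, the "feature transformation" amounts to mapping both modalities into the same occupancy-probability space, so that matching reduces to comparing fixed-length vectors regardless of the input dimension.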
