Vehicle-borne mobile mapping systems (MMSs) have proven to be an efficient means of photogrammetry and remote sensing, as they simultaneously acquire panoramic images, point clouds, and positional information along the collection route from a ground-based perspective. Obtaining accurate matches between point clouds and images is a key issue in applying data from vehicle-borne MMSs. Traditional matching methods, such as point cloud projection, depth map generation, and point cloud coloring, are highly sensitive to how the point clouds are processed and to the matching logic. In this study, we propose a method for generating matching relationships based on panoramic images, taking as input the raw point cloud map, a series of trajectory points, and the corresponding panoramic images acquired by a vehicle-borne MMS. A point-cloud-processing workflow removes irrelevant points from the point cloud map and extracts the point cloud scenes corresponding to the trajectory points. During matching, a collinearity model based on spherical projection projects the point cloud scenes onto the panoramic images. A vectorial angle selection algorithm is also designed to filter out occluded point cloud projections during matching, generating a series of matching results between point clouds and panoramic images at the trajectory points. Experimental verification indicates that the method produces matching results with an average pixel error of approximately 2.82 pixels and an average positional error of approximately 4 cm, while processing the data efficiently. The method is suitable for fusing panoramic images and point clouds acquired by vehicle-borne MMSs in road scenes, supports various algorithms based on visual features, and has promising applications in navigation, positioning, surveying, and mapping.
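The projection step described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it projects map points onto an equirectangular panorama via spherical angles, assuming a camera frame with x forward, y left, z up (a convention the abstract does not specify), and uses a simple per-pixel depth buffer as a generic stand-in for the paper's vectorial angle selection, whose details are not given in the abstract. All function names here are illustrative.

```python
import numpy as np

def project_to_panorama(points_world, cam_pos, R_wc, width, height):
    """Project 3D map points onto an equirectangular panorama.

    points_world : (N, 3) points in the map/world frame
    cam_pos      : (3,) panoramic camera centre (a trajectory point)
    R_wc         : (3, 3) rotation from world to camera frame
    width, height: panorama size in pixels (typically width == 2 * height)
    Returns pixel coordinates (N, 2) and ranges (N,).
    """
    # Transform points into the camera frame (x forward, y left, z up -- assumed).
    p = (points_world - cam_pos) @ R_wc.T
    r = np.linalg.norm(p, axis=1)
    lon = np.arctan2(p[:, 1], p[:, 0])             # azimuth in (-pi, pi]
    lat = np.arcsin(np.clip(p[:, 2] / r, -1, 1))   # elevation in [-pi/2, pi/2]
    # Map spherical angles to pixel coordinates (collinearity on the sphere).
    u = (0.5 - lon / (2 * np.pi)) * width
    v = (0.5 - lat / np.pi) * height
    return np.stack([u, v], axis=1), r

def filter_occluded(uv, r, width, cell=4):
    """Keep only the nearest point per pixel cell (a plain z-buffer,
    standing in for the paper's vectorial angle selection)."""
    keys = (uv[:, 1].astype(int) // cell) * (width // cell) \
           + uv[:, 0].astype(int) // cell
    order = np.argsort(r)                 # nearest points first
    _, first = np.unique(keys[order], return_index=True)
    keep = np.zeros(len(r), dtype=bool)
    keep[order[first]] = True             # first hit per cell = visible point
    return keep
```

For example, a point straight ahead of the camera lands at the panorama centre, and of two points on the same ray only the nearer survives the occlusion filter.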