Optical information synthesis, which fuses LiDAR and optical cameras, can produce highly detailed 3D representations. However, because of the disparity in information density between point clouds and images, conventional point-based matching methods often lose significant information. To address this issue, we propose a regional matching method that bridges the difference in information density between point clouds and images. Specifically, fine semantic regions are extracted from images by analyzing image gradients. In parallel, point clouds are converted into meshes in which each facet corresponds to a coarse semantic region. An extrinsic matrix aligns the point cloud coordinate system with the image coordinate system. The mesh is then subdivided under the guidance of image texture information to form regional matching units. Within each matching unit, the information density of the point cloud and the image is balanced at a semantic level, and the texture features of the image are well preserved in the resulting mesh structure. Consequently, the proposed texture synthesis method significantly enhances the overall quality and realism of 3D imaging.
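The coordinate unification step mentioned above is, in essence, a projection of LiDAR points into the camera frame. The following is a minimal sketch of that step, assuming a standard pinhole camera model with an intrinsic matrix K and LiDAR-to-camera extrinsics [R|t]; the function name and interface are illustrative and not taken from the paper.

```python
import numpy as np

def project_points_to_image(points_lidar, R, t, K):
    """Project LiDAR points (N, 3) into pixel coordinates with a pinhole model.

    R (3x3) and t (3,) are the assumed LiDAR-to-camera extrinsics;
    K (3x3) is the camera intrinsic matrix. Returns (N, 2) pixel
    coordinates and a boolean mask of points in front of the camera.
    """
    # Transform points from the LiDAR frame into the camera frame.
    points_cam = points_lidar @ R.T + t

    # Keep only points with positive depth (in front of the camera).
    in_front = points_cam[:, 2] > 0

    # Perspective projection onto the image plane.
    uvw = points_cam @ K.T
    pixels = uvw[:, :2] / uvw[:, 2:3]
    return pixels, in_front
```

Once projected, each mesh facet can be associated with the image pixels it covers, which is what makes texture-guided subdivision of the facets into regional matching units possible.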