With the continuous development of three-dimensional (3D) city modeling, traditional close-range photogrammetry is limited by complex processing procedures and incomplete 3D depth information, and therefore cannot meet high-precision modeling requirements. In contrast, the integration of light detection and ranging (LiDAR) sensors and cameras in mobile measurement systems provides a new and highly effective solution. Current integrated mobile measurement systems typically require cameras, laser scanners, a position and orientation system, and inertial measurement units; as a result, the hardware is relatively expensive and the system integration is complex. Therefore, in this paper, we propose a ground mobile measurement system composed of only a LiDAR sensor and a GoPro camera, providing a more convenient and reliable way to automatically obtain 3D point cloud data with spectral information. The automatic point cloud coloring based on video images comprises four parts: (1) establishing radial and tangential distortion models to correct the video images; (2) establishing a registration method based on normalized Zernike moments to obtain the exterior orientation elements, with a registration error of only 0.5–1 pixel, which is considerably more accurate than registration based on the collinearity equation; (3) establishing relative orientation based on essential matrix decomposition and nonlinear optimization, in which corresponding points are selected using the speeded-up robust features (SURF) algorithm with a distance restriction and random sample consensus (RANSAC), yielding a stereo image pair model with a vertical parallax of less than one pixel, indicating high accuracy; and (4) adopting a point cloud coloring method based on a Gaussian distribution with a central region restriction, in which only pixels within the central region of each image are considered valid for coloring and each point is colored with the mean of the Gaussian distribution fitted to its color set. In the colored point cloud, the building textures are clear, and targets such as windows, grass, trees, and vehicles can be clearly distinguished. Overall, the results meet the accuracy requirements of applications such as tunnel detection, street-view modeling, and 3D urban modeling.
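To make the coloring step concrete, the following is a minimal NumPy sketch of Gaussian-mean coloring with a central-region restriction, in the spirit of step (4) above. The frame representation, the `central_ratio` parameter, and the k-sigma outlier rejection are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np

def project_points(points, R, t, K):
    """Project Nx3 world points into an image with rotation R, translation t,
    and pinhole intrinsic matrix K. Returns pixel coords (Nx2) and depths (N,)."""
    cam = points @ R.T + t                 # world frame -> camera frame
    depths = cam[:, 2]
    uv = (cam @ K.T)[:, :2] / depths[:, None]   # perspective division
    return uv, depths

def collect_colors(points, frames, central_ratio=0.6):
    """For each 3D point, gather RGB samples from every video frame in which its
    projection falls inside the central region (central_ratio of width/height)."""
    samples = [[] for _ in range(len(points))]
    for image, R, t, K in frames:          # image: HxWx3 uint8, pose (R, t), intrinsics K
        h, w = image.shape[:2]
        u0, u1 = w * (1 - central_ratio) / 2, w * (1 + central_ratio) / 2
        v0, v1 = h * (1 - central_ratio) / 2, h * (1 + central_ratio) / 2
        uv, depths = project_points(points, R, t, K)
        valid = (depths > 0) & (uv[:, 0] >= u0) & (uv[:, 0] < u1) \
                             & (uv[:, 1] >= v0) & (uv[:, 1] < v1)
        for i in np.flatnonzero(valid):
            u, v = uv[i]
            samples[i].append(image[int(v), int(u)].astype(float))
    return samples

def gaussian_mean_color(samples, k=2.0):
    """Fit a per-channel Gaussian to each point's color set, reject samples more
    than k standard deviations from the mean, and return the mean as the color."""
    colors = np.zeros((len(samples), 3), dtype=np.uint8)
    for i, s in enumerate(samples):
        if not s:
            continue                        # point never seen in a central region
        s = np.asarray(s)
        mu, sigma = s.mean(axis=0), s.std(axis=0) + 1e-6
        keep = np.all(np.abs(s - mu) <= k * sigma, axis=1)
        colors[i] = (s[keep].mean(axis=0) if keep.any() else mu).astype(np.uint8)
    return colors
```

The central-region restriction keeps color samples away from the image borders, where residual distortion and motion blur are typically largest, while the Gaussian mean suppresses outlier colors caused by occlusions or exposure changes between frames.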