Abstract

Mobile mapping systems (MMS) can capture dense point-clouds of urban scenes. To visualize realistic scenes from point-clouds, RGB colors must be added to the points. To generate colored point-clouds in a post-process, each point is projected onto camera images and an RGB color is copied to the point from the projected position. However, incorrect colors are often added to point-clouds because of the misalignment of laser scanners, calibration errors between cameras and laser scanners, or failures of GPS acquisition. In this paper, we propose a new method for correcting the RGB colors of point-clouds captured by a MMS. In our method, the RGB colors of a point-cloud are corrected by comparing intensity images with RGB images. However, since a MMS outputs sparse and anisotropic point-clouds, regular images cannot be obtained directly from the intensities of points. Therefore, we convert the point-cloud into a mesh model and project its triangle faces onto image space, on which regular lattices are defined. We then extract edge features from the intensity images and RGB images and detect their correspondences. In our experiments, our method worked very well for correcting the RGB colors of point-clouds captured by a MMS.
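The post-process coloring step described above can be sketched as follows: each 3-D point is transformed into camera coordinates, projected through the camera intrinsics, and assigned the RGB value at the projected pixel. This is a minimal sketch only; the intrinsic matrix `K`, rotation `R`, and translation `t` below are hypothetical placeholders, while in a real MMS they come from the camera/scanner calibration.

```python
import numpy as np

# Hypothetical calibration; a real MMS supplies these from camera/scanner calibration.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])   # pinhole intrinsics
R = np.eye(3)                     # world -> camera rotation
t = np.array([0.0, 0.0, 0.0])     # world -> camera translation

def colorize(points, image):
    """Project 3-D points into the image and copy the RGB value at each projection."""
    cam = (R @ points.T).T + t          # world -> camera coordinates
    uvw = (K @ cam.T).T                 # camera -> homogeneous pixel coordinates
    uv = uvw[:, :2] / uvw[:, 2:3]       # perspective divide
    h, w = image.shape[:2]
    colors = np.zeros((len(points), 3), dtype=image.dtype)
    for i, (u, v) in enumerate(uv):
        x, y = int(round(u)), int(round(v))
        if 0 <= x < w and 0 <= y < h:   # skip points projecting outside the image
            colors[i] = image[y, x]
    return colors
```

As the abstract notes, any error in `K`, `R`, or `t` (misalignment, calibration error, GPS failure) shifts the projected pixel and copies the wrong color, which is what the proposed correction addresses.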

Highlights

  • A mobile mapping system (MMS) can be used to capture point-clouds of urban scenes

  • A MMS is a vehicle on which laser scanners, digital cameras, GPS, and IMU are mounted (Figure 1)

  • Intensity values represent the strength of reflected laser beams, and GPS times indicate when points were captured


Summary

INTRODUCTION

A mobile mapping system (MMS) can be used to capture point-clouds of urban scenes. A MMS is a vehicle on which laser scanners, digital cameras, GPS, and IMU are mounted (Figure 1). In this system, the positions and attitudes of the cameras and laser scanners are represented relative to the coordinate system defined on the vehicle. To correct the colors of point-clouds, some researchers calculated correspondences between camera images and range images (Herrera et al., 2011; Stamos and Allen, 2000; Scaramuzza et al., 2007; Unnikrishnan and Hebert, 2005; Viola et al., 1997; Zhang and Pless, 2004). These methods extract feature points, such as corners of planar regions, from range images and RGB images, and compare them. In contrast, we extract edge features from intensity images and RGB images, and correct the RGB colors of points by detecting correspondences between the features in both images.

Converting Point-Clouds into Mesh Model
Projection of Triangles
Edge Detection from Intensity Images
Edge Detection from RGB Images
Matching Edge Features
Segmentation of Corresponding Points
Projective Transformation
EXPERIMENTAL RESULTS
CONCLUSION
