Abstract

Enriching point clouds with colour images improves the visualisation of the data as well as segmentation and recognition processes. Coloured point clouds are becoming increasingly common; however, the colour they display is not always as expected. Errors in the colouring of point clouds acquired with Mobile Laser Scanning are due to perspective in the camera image, differences in resolution, or poor calibration between the LiDAR sensor and the image sensor. The consequences of these errors are most noticeable for elements captured in the images but not in the point cloud, such as the sky. This paper focuses on the correction of the sky-coloured points without resorting to the images that were initially used to colour the whole point cloud. The proposed method consists of three stages. First, the region of interest where the erroneously coloured points accumulate is selected. Second, sky-coloured points are detected by calculating the colour distance in the Lab colour space to a sample of the sky colour. Third, the colour of the detected sky-coloured points is restored from the colour of nearby points. The method is tested on ten real case studies with their corresponding point clouds from urban and rural areas. In two case studies, sky-coloured points were assigned manually; in the remaining eight, the sky-coloured points derive from acquisition errors. The sky-coloured point detection algorithm obtained an average F1-score of 94.7%. The results show a correct reassignment of colour, texture, and patterns, while improving point cloud visualisation.
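
As a rough illustration of the second and third stages, the sketch below detects sky-coloured points by their CIE76 colour distance in Lab space to a sky-colour sample and then restores their colour from the nearest correctly coloured points. It assumes the point cloud is stored as an N x 3 coordinate array and an N x 3 uint8 RGB array; the distance threshold, the number of neighbours, and the function names are illustrative assumptions, not the authors' implementation.

import numpy as np
from scipy.spatial import cKDTree
from skimage.color import rgb2lab

def detect_sky_points(rgb, sky_rgb_sample, delta_e_max=20.0):
    # Convert colours to CIELab and keep points whose CIE76 distance to the
    # mean colour of the sky sample falls below the (illustrative) threshold.
    lab = rgb2lab(rgb.reshape(-1, 1, 3).astype(np.float64) / 255.0).reshape(-1, 3)
    sky_lab = rgb2lab(sky_rgb_sample.reshape(-1, 1, 3).astype(np.float64) / 255.0).reshape(-1, 3)
    delta_e = np.linalg.norm(lab - sky_lab.mean(axis=0), axis=1)
    return delta_e < delta_e_max

def restore_colour(xyz, rgb, sky_mask, k=5):
    # Replace each sky-coloured point's colour with the average colour of its
    # k nearest correctly coloured neighbours, found with a spatial index.
    tree = cKDTree(xyz[~sky_mask])
    _, idx = tree.query(xyz[sky_mask], k=k)
    restored = rgb.copy()
    restored[sky_mask] = rgb[~sky_mask][idx].mean(axis=1).astype(rgb.dtype)
    return restored

# Example usage (xyz: N x 3 float array, rgb: N x 3 uint8 array):
# sky_mask = detect_sky_points(rgb, np.array([[135, 190, 235]], dtype=np.uint8))
# rgb_fixed = restore_colour(xyz, rgb, sky_mask)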

Highlights

  • In recent decades, 3D point clouds captured with LiDAR (Light Detection and Ranging) have attracted the interest of many areas, for instance autonomous driving, intelligent transportation systems, land administration, robotics, urban environments, archaeology, and architecture, since they provide a fast way to acquire real-world data

  • A common strategy uses RGB images captured by a camera as a source of visual meaning to complement the geometric information extracted from Mobile Mapping System (MMS) 3D point clouds [3,4,5]

  • The reassigned colour and texture present variations that are related to the geometry and colour of the nearest non-sky-coloured points

Introduction

In recent decades, 3D point clouds captured with LiDAR (Light Detection and Ranging) have attracted the interest of many areas, for instance autonomous driving, intelligent transportation systems, land administration, robotics, urban environments, archaeology, and architecture, since they provide a fast way to acquire real-world data. MLS (Mobile Laser Scanner) technology provides an accurate dataset composed of Cartesian coordinates (x, y, z) along with light reflectivity [1], in which each point of the point cloud corresponds to a precise 3D location on the environment's surfaces. A common strategy uses RGB images captured by a camera as a source of visual meaning to complement the geometric information extracted from Mobile Mapping System (MMS) 3D point clouds [3,4,5]. The 3D point cloud and the sensor images are independent data sources, processed with quite different technologies and stored separately. The 3D point cloud needs to be saved along with the sensor images in order to extract meaning at any time, and switching the processing from one source of information to the other for different purposes introduces computational cost. A natural step forward from that strategy is to use a colourised 3D point cloud in which the data for each point are the Cartesian coordinates and RGB values [5,7].
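
As a minimal sketch of that data layout, each point of a colourised cloud can be held as one record of Cartesian coordinates plus RGB values; the field names and dtype below are illustrative assumptions, not a standard exchange format.

import numpy as np

# Illustrative per-point layout: Cartesian coordinates plus RGB colour.
point_dtype = np.dtype([
    ("x", np.float64), ("y", np.float64), ("z", np.float64),
    ("r", np.uint8), ("g", np.uint8), ("b", np.uint8),
])

cloud = np.zeros(2, dtype=point_dtype)
cloud[0] = (12.34, -5.67, 2.10, 128, 156, 190)   # one colourised point
cloud[1] = (12.36, -5.65, 2.11, 131, 158, 192)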
