Abstract

The reconstruction of 3D point clouds from image datasets is a time-consuming task that has traditionally been solved by applying photogrammetric techniques to every data source separately. This work presents an approach to efficiently build large and dense point clouds from co-acquired images. In our case study, the sensors co-acquire visible, thermal, and multispectral imagery. Hence, the RGB point cloud is reconstructed with traditional methods, whereas the remaining data sources, which have lower resolution and fewer identifiable features, are projected onto this first cloud, i.e., the most complete and dense one. To this end, the mapping process is accelerated using the Graphics Processing Unit (GPU) and multi-threading on the Central Processing Unit (CPU). Accurate colour aggregation at each 3D point is guaranteed by taking into account the occlusion of foreground surfaces. Accordingly, our solution is shown to reconstruct much denser point clouds than notable commercial software such as Pix4Dmapper and Agisoft Metashape (286% denser on average), in much less time (−70% on average with respect to the best alternative).
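The core operation sketched in the abstract, projecting 3D points into a lower-resolution image (e.g., a thermal band) and sampling colour only for points not occluded by foreground surfaces, can be illustrated with a minimal z-buffer test. This is a hedged sketch, not the paper's implementation: the function name `project_colours`, the pinhole camera model with intrinsics `K` and world-to-camera pose `(R, t)`, and the depth-tolerance factor are all assumptions for illustration; the actual method runs on the GPU and multi-threaded CPU.

```python
import numpy as np

def project_colours(points, band_img, K, R, t):
    """Illustrative sketch (not the paper's code): project 3D points into a
    single-channel image and sample values, keeping only the closest point
    per pixel so that occluded points receive no colour.

    points:   (N, 3) world-space coordinates.
    band_img: (H, W) single-channel image (e.g. one thermal band).
    K:        (3, 3) camera intrinsics (assumed pinhole model).
    R, t:     world-to-camera rotation and translation.
    Returns per-point sampled values; NaN where occluded or out of view.
    """
    h, w = band_img.shape
    cam = points @ R.T + t                 # world -> camera coordinates
    depth = cam[:, 2]
    in_front = depth > 1e-6
    uvw = cam @ K.T                        # perspective projection
    uv = uvw[:, :2] / uvw[:, 2:3]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    projected = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)

    # Z-buffer: per pixel, remember the smallest depth among projected points.
    zbuf = np.full((h, w), np.inf)
    idx = np.flatnonzero(projected)
    np.minimum.at(zbuf, (v[idx], u[idx]), depth[idx])

    out = np.full(len(points), np.nan)
    # A point is coloured only if it is (nearly) the closest at its pixel;
    # the 1.001 tolerance is an illustrative choice, not from the paper.
    keep = idx[depth[idx] <= zbuf[v[idx], u[idx]] * 1.001]
    out[keep] = band_img[v[keep], u[keep]]
    return out
```

For example, two points lying on the same camera ray project to the same pixel; only the nearer one passes the depth test and receives a value, while the occluded one stays NaN, which mirrors the occlusion handling the abstract describes.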

