Abstract

High-resolution consumer cameras on Unmanned Aerial Vehicles (UAVs) allow for the cheap acquisition of highly detailed images, e.g., of urban regions. Via image registration by means of Structure from Motion (SfM) and Multi-View Stereo (MVS), huge amounts of 3D points with a relative accuracy in the centimeter range can be generated automatically. Applications such as semantic classification require accurate 3D point clouds but do not benefit from an extremely high resolution or density. In this paper, we therefore propose a fast fusion of high-resolution 3D point clouds based on occupancy grids, the result of which is used for semantic classification. In contrast to state-of-the-art classification methods, we accept a certain percentage of outliers, arguing that they can be accounted for in the classification process if a per-point belief is determined during fusion. To this end, we employ an octree-based fusion that allows for the derivation of outlier probabilities. These probabilities assign a belief to every 3D point, which is essential for the semantic classification to take measurement noise into account. For an example point cloud with half a billion 3D points (cf. Figure 1), we show that our method reduces runtime, improves classification accuracy, and offers high scalability for large datasets.
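The core idea of occupancy-grid fusion with a per-point belief can be illustrated with a minimal sketch. The paper's actual method is octree-based; the version below uses a flat sparse voxel grid with a standard log-odds occupancy update instead, and the voxel size and log-odds increment (`voxel_size`, `l_hit`) are illustrative assumptions, not values from the paper:

```python
import math
from collections import defaultdict

def fuse_points(points, voxel_size=0.1, l_hit=0.85):
    """Fuse a 3D point cloud into a sparse occupancy grid (log-odds).

    Each point increments the log-odds of its voxel; the resulting
    occupancy probability serves as a per-point belief, so sparsely
    supported voxels (likely outliers) receive a low belief.
    Note: a flat dict stands in for the paper's octree here.
    """
    log_odds = defaultdict(float)
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        log_odds[key] += l_hit  # one log-odds increment per supporting point

    def belief(p):
        key = tuple(int(c // voxel_size) for c in p)
        return 1.0 / (1.0 + math.exp(-log_odds[key]))  # logistic: log-odds -> probability

    return [(p, belief(p)) for p in points]

# A well-supported cluster gets a higher belief than an isolated point:
pts = [(0.01, 0.01, 0.01)] * 5 + [(5.0, 5.0, 5.0)]
fused = fuse_points(pts)
```

A downstream classifier can then weight each point by its belief, so isolated (likely spurious) points contribute less to the semantic labels.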

Highlights

  • Scene classification is important for a wide range of applications and an open field of research with respect to runtime, scalability, and accuracy

  • Accurate 3D point clouds are essential for robust scene classification with state-of-the-art methods

  • We focus on the fast generation of 3D point clouds from image sets that are suitable for semantic scene classification


Summary

INTRODUCTION

Scene classification is important for a wide range of applications and an open field of research with respect to runtime, scalability, and accuracy. Accurate 3D point clouds are essential for robust scene classification with state-of-the-art methods. The processing of such large point clouds on a single PC can take several days (Ummenhofer and Brox, 2015). For practical applications such as scene classification, however, there is basically no need for a computationally complex fusion to obtain accurate 3D point clouds. Guo et al. (2011) present an urban scene classification based on airborne LiDAR and multispectral imagery, studying the relevance of different features from multi-source data. In this paper, we present a robust and efficient analytical pipeline for automatic urban scene classification based on point clouds from disparity maps, adapted to utilize the additional per-point probability information to improve the results. The double-blind peer review was conducted on the basis of the full paper.

GENERATION OF 3D POINT CLOUDS
FUSION OF 3D POINT CLOUDS
Occupancy Grids
SEMANTIC CLASSIFICATION
Patch-wise Scheme
Classification
EXPERIMENTS
Findings
CONCLUSIONS AND FUTURE WORK
