Abstract

We present a powerful method to extract per-point semantic class labels from aerial photogrammetry data. Labeling this kind of data is important for tasks such as environmental modeling, object classification, and scene understanding. Unlike previous point cloud classification methods that rely exclusively on geometric features, we show that incorporating color information yields a significant increase in accuracy in detecting semantic classes. We test our classification method on four real-world photogrammetry datasets that were generated with Pix4Dmapper, and with varying point densities. We show that off-the-shelf machine learning techniques coupled with our new features allow us to train highly accurate classifiers that generalize well to unseen data, processing point clouds containing 10 million points in less than three minutes on a desktop computer. We also demonstrate that our approach can be used to generate accurate Digital Terrain Models, outperforming approaches based on simpler heuristics such as Maximally Stable Extremal Regions.
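The sketch below is not the authors' implementation; it only illustrates the general idea the abstract describes: combining per-point geometric features with color and training an off-the-shelf classifier. The specific feature choices (eigenvalue-based shape descriptors over a k-nearest-neighbor neighborhood), the use of scikit-learn's RandomForestClassifier, the neighborhood size k=20, and the synthetic stand-in data are all assumptions for illustration.

```python
# Minimal sketch (assumed, not the paper's code): per-point geometric + color
# features fed to an off-the-shelf classifier.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.ensemble import RandomForestClassifier

def point_features(xyz, rgb, k=20):
    """Per-point features: eigenvalue-based shape descriptors from a local
    k-neighborhood, plus normalized color. Returns an (N, 6) array."""
    tree = cKDTree(xyz)
    _, idx = tree.query(xyz, k=k)
    feats = np.empty((len(xyz), 6))
    for i, nbrs in enumerate(idx):
        cov = np.cov(xyz[nbrs].T)                      # 3x3 local covariance
        ev = np.sort(np.linalg.eigvalsh(cov))[::-1] + 1e-12
        feats[i, 0] = (ev[0] - ev[1]) / ev[0]          # linearity
        feats[i, 1] = (ev[1] - ev[2]) / ev[0]          # planarity
        feats[i, 2] = ev[2] / ev[0]                    # sphericity
    feats[:, 3:] = rgb / 255.0                         # color from the photogrammetric texture
    return feats

if __name__ == "__main__":
    # Synthetic data stands in for a labeled photogrammetry point cloud.
    rng = np.random.default_rng(0)
    xyz = rng.random((2000, 3))
    rgb = rng.integers(0, 256, size=(2000, 3)).astype(float)
    labels = rng.integers(0, 4, size=2000)             # e.g. ground, vegetation, building, other
    X = point_features(xyz, rgb)
    clf = RandomForestClassifier(n_estimators=100, n_jobs=-1).fit(X, labels)
    print(clf.predict(X[:5]))
```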
