Abstract

We present the scientific outcomes of the 2019 Data Fusion Contest organized by the Image Analysis and Data Fusion Technical Committee of the IEEE Geoscience and Remote Sensing Society. The contest comprised two challenges with large-scale datasets: semantic 3-D reconstruction from satellite images and semantic 3-D point cloud classification from airborne LiDAR. 3-D reconstruction results are discussed separately in Part-A. In this Part-B, we report the results of the two best-performing approaches for 3-D point cloud classification. Both are deep learning methods that improve upon the PointSIFT model with mechanisms to combine multiscale features and task-specific postprocessing to refine model outputs.

Highlights

  • A current challenge of Earth observation is to add a new dimension to the representation of the world

  • Points in the “ground” category occupy more than 60% of the entire training set, while points in the water or elevated road categories account for only 2% and 1%, respectively

  • For a long time, point clouds, in particular those acquired by light detection and ranging (LiDAR), were regarded as a stand-alone product and were used to measure purely geometric properties of a scene, such as height, changes in height, and vegetation volume
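A class imbalance as severe as the one noted above (over 60% ground versus 1-2% for water and elevated roads) is commonly countered with per-class loss weights during training. The sketch below illustrates inverse-frequency weighting; the frequencies for classes other than ground, water, and elevated road are hypothetical placeholders, and the weighting scheme is a generic technique, not necessarily the one used by the contest winners.

```python
# Inverse-frequency class weights for an imbalanced point cloud dataset.
# Only "ground", "water", and "elevated_road" proportions come from the
# text above; the remaining frequencies are illustrative assumptions.
class_freq = {
    "ground": 0.60,
    "vegetation": 0.20,     # hypothetical
    "building": 0.17,       # hypothetical
    "water": 0.02,
    "elevated_road": 0.01,
}

# Weight each class by the inverse of its frequency, then normalize so
# the weights average to 1.0 across classes.
raw = {c: 1.0 / f for c, f in class_freq.items()}
mean_raw = sum(raw.values()) / len(raw)
weights = {c: w / mean_raw for c, w in raw.items()}
```

Such weights are typically passed to the training loss (e.g., a weighted cross-entropy), so that misclassifying a rare water or elevated-road point costs far more than misclassifying a ground point.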



Introduction

A current challenge of Earth observation is to add a new dimension to the representation of the world. From critical applications such as flight management and urban planning to environmental monitoring of forests, floods, and landslides, 3-D models of the ground are an essential source of insightful information.

