Abstract

Recent advances in sensor design allow us to gather more useful information about the Earth's surface, for example with hyperspectral (HS) and Light Detection And Ranging (LiDAR) sensors. Both, however, have limitations: HS data cannot distinguish different objects made of similar materials and suffers severely in cloud-shadow regions, whereas LiDAR cannot separate distinct objects at the same altitude. Fusion of HS and LiDAR data has therefore recently attracted interest for improving classification performance, but remains challenging. In particular, existing methods perform poorly in cloud-shadow regions because of the lack of correspondence with shadow-free regions and the scarcity of training data there. In this paper, we propose a new framework that fuses HS and LiDAR data for the classification of remote sensing scenes mixed with cloud shadow. We process the cloud-shadow and shadow-free regions separately; our main contribution is a novel method to generate reliable training samples in the cloud-shadow regions. Classification is performed separately in the shadow-free regions (with a classifier trained on the available training samples) and the cloud-shadow regions (with a classifier trained on our generated training samples), integrating spectral (the original HS image), spatial (morphological features computed on the HS image) and elevation (morphological features computed on the LiDAR data) features. The final classification map is obtained by fusing the results from the shadow-free and cloud-shadow regions. Experimental results on a real HS and LiDAR dataset demonstrate the effectiveness of the proposed method: the framework improves overall classification accuracy by 4% for the whole scene and by 10% for the shadow-free regions over the other methods.
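The region-wise classification and fusion described above can be sketched as follows. This is a minimal illustrative outline, not the authors' implementation: the function name `classify_scene`, the `clf_factory` argument, and the assumption that features are simply stacked per pixel are all hypothetical simplifications of the described pipeline (feature stacking of spectral, spatial, and elevation features; one classifier per region type; fusion via the shadow mask).

```python
import numpy as np

def classify_scene(spectral, spatial, elevation, shadow_mask,
                   train_idx_sunlit, train_labels_sunlit,
                   train_idx_shadow, train_labels_shadow, clf_factory):
    """Hypothetical sketch of region-wise HS/LiDAR fusion classification.

    spectral:   (H, W, B) original HS bands
    spatial:    (H, W, S) morphological features computed on the HS image
    elevation:  (H, W, E) morphological features computed on the LiDAR data
    shadow_mask: (H, W) boolean, True where the pixel lies in cloud shadow
    clf_factory: callable returning a fresh classifier with fit/predict
    """
    # Stack spectral, spatial and elevation features per pixel.
    features = np.concatenate([spectral, spatial, elevation], axis=-1)
    X = features.reshape(-1, features.shape[-1])
    mask = shadow_mask.reshape(-1)

    # One classifier per region type: the shadow-free classifier uses the
    # available labels; the shadow classifier would use the generated samples.
    clf_sun = clf_factory().fit(X[train_idx_sunlit], train_labels_sunlit)
    clf_shadow = clf_factory().fit(X[train_idx_shadow], train_labels_shadow)

    # Classify each region with its own classifier, then fuse the two maps.
    labels = np.empty(X.shape[0], dtype=int)
    labels[~mask] = clf_sun.predict(X[~mask])
    labels[mask] = clf_shadow.predict(X[mask])
    return labels.reshape(shadow_mask.shape)
```

Any classifier exposing `fit`/`predict` (e.g. an SVM or random forest) can be plugged in via `clf_factory`; the key design point from the abstract is that the cloud-shadow classifier is trained on separately generated samples rather than the shadow-free ones.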
