Abstract

In remote sensing, hyperspectral and polarimetric synthetic aperture radar (PolSAR) images are two of the most versatile data sources for a wide range of applications such as land use land cover classification. However, the fusion of these two data sources receives less attention than many others because of scarce data availability and the relatively challenging fusion task caused by their distinct imaging geometries. Among existing fusion methods, including manifold learning-based, kernel-based, ensemble-based, and matrix factorization-based approaches, manifold learning is one of the most celebrated techniques for fusing heterogeneous data. Therefore, this paper aims to promote research in hyperspectral and PolSAR data fusion by providing a comprehensive comparison of existing manifold learning-based fusion algorithms. We conducted experiments on 16 state-of-the-art manifold learning algorithms that address two important research questions in manifold learning-based fusion of hyperspectral and PolSAR data: (1) in which domain the data should be aligned, the data domain or the manifold domain; and (2) how to make use of existing labeled data when formulating a graph to represent a manifold, in a supervised, semi-supervised, or unsupervised manner. The performance of the algorithms was evaluated via multiple accuracy metrics of land use land cover classification over two data sets. Results show that algorithms based on manifold alignment generally outperform those based on data alignment (data concatenation). Semi-supervised manifold alignment fusion algorithms perform best among all. Experiments using multiple classifiers show that they outperform the benchmark data alignment-based algorithms by ca. 3% in terms of overall classification accuracy.
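To make the contrast between the two alignment strategies concrete, the following NumPy sketch illustrates data-domain alignment (feature concatenation) versus an unsupervised manifold-domain alignment (a joint graph Laplacian embedding). This is a minimal illustration only, not any of the paper's 16 algorithms: the toy random features, the `knn_affinity` helper, the cross-modality weight `mu`, and all dimensions are assumptions made for the example.

```python
import numpy as np

# Toy co-registered features: hyperspectral (n x d1) and PolSAR (n x d2) pixels.
rng = np.random.default_rng(0)
n, d1, d2 = 100, 30, 9
X_hsi = rng.normal(size=(n, d1))
X_sar = rng.normal(size=(n, d2))

# --- Data alignment: simple feature concatenation ---
X_cat = np.hstack([X_hsi, X_sar])  # shape (n, d1 + d2)

# --- Manifold alignment (unsupervised sketch) ---
# Build a k-NN affinity graph per modality, then embed both modalities
# into one latent space via a joint graph Laplacian.
def knn_affinity(X, k=10):
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    W = np.zeros((len(X), len(X)))
    idx = np.argsort(d, axis=1)[:, 1:k + 1]  # nearest neighbors, skipping self
    for i, nbrs in enumerate(idx):
        W[i, nbrs] = 1.0
    return np.maximum(W, W.T)  # symmetrize

# Joint graph: block-diagonal within-modality affinities plus identity
# cross-links (the same pixel observed by both sensors).
W1, W2 = knn_affinity(X_hsi), knn_affinity(X_sar)
mu = 1.0  # cross-modality coupling weight (illustrative choice)
W = np.block([[W1, mu * np.eye(n)],
              [mu * np.eye(n), W2]])
L = np.diag(W.sum(axis=1)) - W  # combinatorial graph Laplacian

# Smallest non-trivial eigenvectors give a shared low-dimensional embedding.
vals, vecs = np.linalg.eigh(L)
Z = vecs[:, 1:11]               # 10-dim joint embedding of both modalities
Z_hsi, Z_sar = Z[:n], Z[n:]     # per-modality coordinates in the shared space
```

A downstream classifier can then be trained either on `X_cat` (data alignment) or on `Z_hsi`/`Z_sar` (manifold alignment), which is the comparison the paper's experiments formalize.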

Highlights

  • This paper investigates the performance of manifold learning techniques on the fusion of hyperspectral and polarimetric synthetic aperture radar (PolSAR) data, based on four state-of-the-art algorithms: locality preserving projection (LPP) [52], generalized graph fusion (GGF) [48], manifold alignment (MA) [36,44], and MAPPER-induced manifold alignment (MIMA) [53]

  • The results support the discussion of the two fusion approaches, the data alignment-based and the manifold alignment-based, for the fusion of hyperspectral imagery and PolSAR data

  • This paper compares 16 variants of four state-of-the-art multi-sensory data fusion algorithms based on manifold learning

Introduction

Multi-modal data fusion [1,2,3,4,5,6,7] continuously draws attention in the remote sensing community. The fusion of optical and synthetic aperture radar (SAR) data, two important yet intrinsically different data sources, has begun to appear frequently in the context of multi-modal data fusion [8,9,10,11,12,13,14]. Among all optical data [18,19,20], hyperspectral data are well known for the distinguishing power that originates from their rich spectral information [21,22,23,24]. It is therefore of great interest to investigate the fusion of hyperspectral and PolSAR images, especially for the application of land use land cover (LULC) classification.
