Abstract

Deep learning (DL) has heavily impacted the data-intensive field of remote sensing. Autoencoders are a class of DL methods that have proven powerful for blind hyperspectral unmixing (HU). HU is the process of resolving the measured spectrum of a pixel into a combination of a set of spectral signatures, called endmembers, while simultaneously determining their fractional abundances in the pixel. This article details the various autoencoder architectures used in HU and provides a critical comparison of some of the existing published blind unmixing methods based on autoencoders. Eleven different autoencoder methods and one traditional method are compared in blind unmixing experiments using four real datasets and four synthetic datasets with different spectral variability. Additionally, extensive ablation experiments are performed with a simple spectral unmixing autoencoder. The results are interpreted in terms of the various implementation details, and the question of why autoencoder methods are so powerful compared to traditional methods is answered. The source code for all methods implemented in this article can be found at https://github.com/burknipalsson/hu_autoencoders.

Highlights

  • Over the last decade, deep learning (DL) has opened new possibilities in processing data in data-intensive fields such as hyperspectral imaging

  • The ablation experiments aim to demonstrate what makes autoencoder-based methods powerful compared to traditional methods

  • The endmembers extracted by the methods will be evaluated using the mean spectral angle distance, given by $\mathrm{mSAD} = \frac{1}{R}\sum_{i=1}^{R}\arccos\!\left(\frac{\mathbf{a}_i^{\top}\hat{\mathbf{a}}_i}{\lVert \mathbf{a}_i \rVert_2\, \lVert \hat{\mathbf{a}}_i \rVert_2}\right)$, where $\hat{\mathbf{a}}_i$ are the endmembers extracted by the method, and $\mathbf{a}_i$ are the reference endmembers
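The mean spectral angle distance above can be sketched in a few lines of NumPy. This is an illustrative implementation, not the article's own code; it assumes the estimated endmembers have already been matched to their reference counterparts (row `i` of one array corresponds to row `i` of the other).

```python
import numpy as np

def mean_sad(E_est: np.ndarray, E_ref: np.ndarray) -> float:
    """Mean spectral angle distance (in radians).

    E_est, E_ref: arrays of shape (R, B) -- R matched endmembers, B bands.
    """
    cos = np.sum(E_est * E_ref, axis=1) / (
        np.linalg.norm(E_est, axis=1) * np.linalg.norm(E_ref, axis=1)
    )
    # Clip guards against floating-point values slightly outside [-1, 1].
    return float(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))
```

Identical endmembers give an mSAD of 0, and the angle is invariant to per-endmember scaling, which is why it is a common endmember-quality metric.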


Introduction

Deep learning (DL) has opened new possibilities for processing data in data-intensive fields such as hyperspectral imaging. Hyperspectral imaging belongs to imaging spectrometry, where an entire spectrum is acquired at every pixel. The technique has been defined by Goetz et al. [1] as "the acquisition of images in hundreds of contiguous, registered, spectral bands such that for each pixel a radiance spectrum can be derived". Because of the high spectral resolution, hyperspectral imaging data is very high dimensional and large in size compared to data from other imaging techniques. This high dimensionality, along with both linear and nonlinear spectral mixing, requires sophisticated data analysis methods, often taking the form of non-convex optimization and modeling [2].
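The linear spectral mixing mentioned above is commonly formalized as the linear mixing model (LMM): a pixel's spectrum is a convex combination of endmember signatures plus noise, with the abundances obeying nonnegativity (ANC) and sum-to-one (ASC) constraints. The sketch below illustrates this with randomly generated, purely hypothetical endmembers, not data from any real scene.

```python
import numpy as np

rng = np.random.default_rng(0)
bands, n_endmembers = 200, 3

# Hypothetical endmember signatures, one per column (shape: bands x endmembers).
M = rng.random((bands, n_endmembers))

# Fractional abundances: nonnegative (ANC) and summing to one (ASC).
a = np.array([0.6, 0.3, 0.1])
assert (a >= 0).all() and np.isclose(a.sum(), 1.0)

# Observed pixel spectrum under the LMM: y = M a + noise.
y = M @ a + 0.001 * rng.standard_normal(bands)
```

Blind unmixing inverts this model: given only the observed spectra `y` for all pixels, it estimates both `M` and the abundances, which is what makes the problem non-convex.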
