Abstract

Hyperspectral unmixing is an important and challenging task in remote sensing that arises when the spatial resolution of sensors is insufficient to separate spectrally distinct materials. Hyperspectral images, like other natural images, have highly correlated pixels, and it is highly desirable to make use of this spatial information. In this paper, a deep-learning-based method for blind hyperspectral unmixing is presented. The method uses multitask learning through multiple parallel autoencoders to unmix a neighborhood of pixels simultaneously. Operating on image patches instead of single pixels enables the method to take advantage of the spatial information in the hyperspectral image. The method is the first in its class to directly utilize the spatial structure of hyperspectral images (HSIs) for the estimation of the spectral signatures of the endmembers in the data cube. We evaluate the proposed method using two real HSIs and compare it to seven state-of-the-art methods that rely either on spectral information alone or on both spectral and spatial information. The proposed method outperforms all the baseline unmixing methods in our experiments.
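
As a rough illustration of the patch-based input described above, the following minimal NumPy sketch shows how each k × k neighborhood of an HSI cube can be gathered into k² pixel spectra that are unmixed jointly. The function name extract_patches, the patch size k, and the array shapes are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def extract_patches(hsi, k):
    """Collect every k x k spatial neighborhood of an HSI cube.

    hsi : array of shape (rows, cols, bands)
    Returns an array of shape (num_patches, k * k, bands), where each
    entry holds the k^2 pixel spectra of one neighborhood.
    """
    rows, cols, bands = hsi.shape
    patches = []
    for r in range(rows - k + 1):
        for c in range(cols - k + 1):
            block = hsi[r:r + k, c:c + k, :]             # k x k x bands
            patches.append(block.reshape(k * k, bands))  # k^2 spectra
    return np.stack(patches)

# Example with synthetic data: a 10 x 10 image with 50 bands, 3 x 3 patches.
hsi = np.random.rand(10, 10, 50).astype(np.float32)
X = extract_patches(hsi, k=3)
print(X.shape)  # (64, 9, 50): 64 neighborhoods, 9 pixels each, 50 bands
```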

Highlights

  • Due to physical limitations of the sensors used in the acquisition of hyperspectral imagery, their spatial resolution is insufficient to separate spectrally distinct materials in the scene, resulting in mixed pixels

  • The multitask autoencoder unmixing (MTAEU) method achieves the lowest average MSE score and has the least variance of all the methods. This is not surprising, given both the benefits of multitask learning (MTL) discussed earlier and the fact that the abundance maps for MTAEU are the mean of all the abundance maps of the k² autoencoders in the network (a small sketch of this averaging follows the list below)

  • This paper introduced a novel autoencoder-based method for hyperspectral unmixing (HSU) that uses many autoencoders in parallel to benefit from multitask learning and exploit the spatial correlations in hyperspectral images (HSIs)
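
As a rough sketch of the averaging mentioned in the second highlight, the final abundance map can be taken as the pixel-wise mean of the k² per-branch abundance maps, which also helps explain the low variance of the estimate. The shapes and the Dirichlet-generated toy data below are purely illustrative assumptions.

```python
import numpy as np

# Hypothetical stack of per-branch abundance maps: one map per autoencoder
# in the k x k neighborhood, each of shape (rows, cols, num_endmembers).
k = 3
rows, cols, num_endmembers = 100, 100, 4
branch_maps = np.random.dirichlet(np.ones(num_endmembers),
                                  size=(k * k, rows, cols))

# The final abundance map is the pixel-wise mean over the k^2 branches.
abundance_map = branch_maps.mean(axis=0)   # (rows, cols, num_endmembers)
print(abundance_map.shape)
print(np.allclose(abundance_map.sum(axis=-1), 1.0))  # sum-to-one is preserved by averaging
```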

Summary

INTRODUCTION

Due to physical limitations of the sensors used in the acquisition of hyperspectral imagery, their spatial resolution is insufficient to separate spectrally distinct materials in the scene, resulting in mixed pixels. Through the sharing of the first hidden layer between autoencoders, each autoencoder, or task, has access to all features from all the pixels input to the network. By selecting these pixels from a neighborhood in the HSI, we exploit the spatial correlation in the HSI, i.e., the assumption that all the pixels from a small neighborhood should have similar abundances. Examples of this are [39]–[41] and [42], which applies a spatial group sparsity regularization derived from segmentation. What all these methods have in common is the use of natural assumptions about the spatial correlation of pixels in an HSI as priors to control the sparsity and smoothness of the obtained abundance maps. For layers shared between branches, the subscript i will be dropped for activations and weights.
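
To make the weight sharing concrete, the following PyTorch sketch shows k² autoencoder branches that share their first hidden layer, so each branch sees features computed from all pixels of the neighborhood. The layer sizes, the activations, and the single linear decoder whose weights act as a shared endmember matrix are illustrative assumptions (common in autoencoder unmixing), not the exact architecture of the paper.

```python
import torch
import torch.nn as nn

class MultitaskUnmixingAE(nn.Module):
    """Sketch of k^2 parallel autoencoder branches with a shared first layer."""

    def __init__(self, bands, num_endmembers, k, hidden=256):
        super().__init__()
        self.k2 = k * k
        # Shared first hidden layer: sees the spectra of all k^2 pixels, so every
        # branch has access to features from the whole neighborhood.
        self.shared = nn.Sequential(
            nn.Linear(self.k2 * bands, hidden),
            nn.ReLU(),
        )
        # One encoder head per pixel/branch, mapping shared features to abundances.
        self.heads = nn.ModuleList(
            nn.Linear(hidden, num_endmembers) for _ in range(self.k2)
        )
        # Single linear decoder; its weight matrix plays the role of the endmember
        # matrix shared by all branches (an assumption for this sketch).
        self.decoder = nn.Linear(num_endmembers, bands, bias=False)

    def forward(self, patch):
        # patch: (batch, k^2, bands)
        shared_feats = self.shared(patch.flatten(1))
        abundances = torch.stack(
            [torch.softmax(head(shared_feats), dim=1) for head in self.heads],
            dim=1,
        )                                   # (batch, k^2, num_endmembers), sum-to-one
        recon = self.decoder(abundances)    # (batch, k^2, bands)
        return recon, abundances

# Toy forward pass: 3 x 3 patches, 50 bands, 4 endmembers.
model = MultitaskUnmixingAE(bands=50, num_endmembers=4, k=3)
x = torch.rand(8, 9, 50)
recon, abund = model(x)
print(recon.shape, abund.shape)  # torch.Size([8, 9, 50]) torch.Size([8, 9, 4])
```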

PROBLEM FORMULATION
LOSS FUNCTION
Findings
CONCLUSION