Abstract

The relatively coarse spatial resolution of hyperspectral images causes the spectral responses of disparate materials to mix within the sensor's instantaneous field of view (IFOV), resulting in mixed pixels. The current study proposes a capsule-based generative encoding model, called a denoising unmixing encoder network (DUENet), to formulate an end-to-end trainable spectral unmixing model. The reconstruction and cross-entropy losses, together with input prior-based constraints, achieve joint optimization of denoising, data imputation, and spectral unmixing. Unlike earlier approaches, interpolation-based convolution and dynamic time warping (DTW)-based convolutional units enable DUENet to unmix even noisy spectra. In addition to embedding label information to improve the physical significance of the latent space, DUENet dynamically learns the parameters of the interpolation kernels. Benchmark airborne hyperspectral datasets (the Nabesna and Cuprite datasets) and simulated datasets were employed to evaluate the performance of the proposed approach. It was observed that the proposed joint optimization of spectral unmixing and denoising significantly improves the results. The adopted feature characterization using capsules improves generalizability and gives good results, even with a limited number of training samples. This study shows the need for interpretability-based evaluation measures to analyze unmixing frameworks based on the concepts learned for each endmember. The experiments confirm that the proposed strategy significantly reduces the model's sensitivity to network parameters.
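The DTW-based convolutional units mentioned above rely on dynamic time warping, an alignment-based similarity measure that tolerates local shifts along the spectral axis. The following is a minimal textbook DTW sketch to illustrate the underlying distance computation; it is a generic illustration, not DUENet's actual implementation, and the function name `dtw_distance` is a placeholder chosen here.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences (e.g. spectra).

    Generic textbook DTW with absolute-difference local cost; shown only to
    illustrate the alignment-based similarity a DTW-based convolutional unit
    could build on.
    """
    n, m = len(a), len(b)
    # D[i, j] holds the cost of the best alignment of a[:i] with b[:j].
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

# A small wavelength shift (repeated leading sample) still aligns at zero cost,
# whereas a plain Euclidean comparison of the overlapping samples would not.
print(dtw_distance(np.array([0., 0., 1., 2.]), np.array([0., 1., 2.])))
```

Because DTW absorbs local misalignments like this, a convolution built on it can match noisy or slightly shifted spectra that a rigid element-wise kernel would penalize.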
