Abstract

To alleviate the shortcomings of target detection that relies on only a single information domain and to reduce redundant information among adjacent bands, we propose a spectral–spatial target detection (SSTD) framework in deep latent space based on self-spectral learning (SSL) with a spectral generative adversarial network (GAN). The concept of SSL is introduced into hyperspectral feature extraction in an unsupervised fashion to suppress the background and enhance target saliency. In particular, a novel structure-to-structure selection rule, which takes full account of structure, contrast, and luminance similarity, is established to interpret the mapping between the latent spectral feature space and the original spectral band space and to generate the optimal spectral band subset without any prior knowledge. Finally, the comprehensive result is obtained by nonlinearly combining the spatial detection on the fused latent features with the spectral detection on the selected band subset and the corresponding selected target signature. This paper thus opens a novel self-spectral learning path for hyperspectral target detection and identifies bands that are sensitive to specific targets in practice. Comparative analyses demonstrate that the proposed SSTD method achieves superior detection performance to CSCR, ACE, CEM, hCEM, and ECEM.
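
The structure-to-structure selection rule is described here only at a high level. Below is a minimal Python sketch of one plausible interpretation, assuming the rule scores each original band against each latent feature map with an SSIM-style index that combines luminance, contrast, and structure terms, and then keeps the highest-scoring bands. The function and variable names (`ssim_index`, `select_bands`, `c1`, `c2`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ssim_index(x, y, c1=1e-4, c2=9e-4):
    """SSIM-style similarity between two 2-D images (luminance, contrast, structure)."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    luminance = (2 * mu_x * mu_y + c1) / (mu_x**2 + mu_y**2 + c1)
    contrast_structure = (2 * cov_xy + c2) / (var_x + var_y + c2)
    return luminance * contrast_structure

def select_bands(latent_features, bands, k=10):
    """Score each original band by its best similarity to any latent feature map,
    then keep the k highest-scoring bands (illustrative selection rule)."""
    scores = np.array([
        max(ssim_index(f, b) for f in latent_features)
        for b in bands
    ])
    return np.argsort(scores)[::-1][:k]

# Toy usage: 8 latent feature maps and a 50-band, 100x100-pixel hyperspectral cube.
rng = np.random.default_rng(0)
latent = rng.normal(size=(8, 100, 100))
cube = rng.normal(size=(50, 100, 100))
subset = select_bands(latent, cube, k=10)
print("selected band indices:", subset)
```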
