Real hyperspectral images (HSIs) are ineluctably contaminated by diverse types of noise, which severely limits their usability. Recently, transfer learning has been introduced into hyperspectral denoising networks to improve model generalizability. However, current frameworks often rely on hand-crafted image priors and struggle to retain the fidelity of background information. In this article, an unsupervised adaptation learning (UAL)-based hyperspectral denoising network (UALHDN) is proposed to address these issues. The core idea is to first learn a general image prior for most HSIs, and then adapt it to a real HSI by learning deep priors and maintaining background consistency, without introducing hand-crafted priors. Following this notion, a spatial-spectral residual denoiser, a global modeling discriminator, and a hyperspectral discrete representation learning scheme are introduced into the UALHDN framework and employed across two learning stages. First, the denoiser and the discriminator are pretrained using synthetic noisy-clean ground-based HSI pairs. Subsequently, the denoiser is fine-tuned on the real multiplatform HSI in an unsupervised manner, according to a spatial-spectral consistency constraint and a background consistency loss. A hyperspectral discrete representation learning scheme is also designed for the fine-tuning stage to extract semantic features and estimate noise-free components, exploring the deep priors specific to real HSIs. The applicability and generalizability of the proposed UALHDN framework were verified through experiments on real HSIs from various platforms and sensors, including unmanned aerial vehicle-borne, airborne, spaceborne, and Martian datasets. The UAL denoising scheme shows superior denoising ability compared with state-of-the-art hyperspectral denoisers.
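The abstract does not give the loss formulations, but the unsupervised fine-tuning objective it describes (a spatial-spectral consistency constraint plus a background consistency loss) can be sketched as follows. This is a minimal illustration only: the function names, the gradient-based form of the spatial-spectral term, the binary background mask, and the 0.1 weighting are all assumptions, not the authors' actual definitions.

```python
import numpy as np

def spatial_spectral_consistency(denoised, noisy):
    # Hypothetical spatial-spectral term: compare first differences of the
    # denoised cube and the noisy observation along a spatial axis (axis 0)
    # and along the spectral axis (axis 2). Axis layout: height x width x bands.
    spatial = np.mean((np.diff(denoised, axis=0) - np.diff(noisy, axis=0)) ** 2)
    spectral = np.mean((np.diff(denoised, axis=2) - np.diff(noisy, axis=2)) ** 2)
    return spatial + spectral

def background_consistency(denoised, noisy, mask):
    # Hypothetical background term: keep denoised background pixels
    # (mask == 1) close to the observation, preserving background fidelity.
    return np.mean(((denoised - noisy) * mask[..., None]) ** 2)

# Toy hyperspectral cube standing in for a real multiplatform HSI.
rng = np.random.default_rng(0)
clean = rng.random((8, 8, 16))
noisy = clean + 0.05 * rng.standard_normal((8, 8, 16))
mask = np.ones((8, 8))  # background mask (all background in this toy case)

# Assumed weighting between the two terms; the paper's balance is not given.
loss = spatial_spectral_consistency(clean, noisy) \
       + 0.1 * background_consistency(clean, noisy, mask)
```

In the actual framework this scalar would be backpropagated through the denoiser during the unsupervised fine-tuning stage; the sketch only shows how the two consistency terms could be combined into a single objective.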