The objective of hyperspectral pansharpening is to fuse a low-resolution hyperspectral image (LR-HSI) with a corresponding panchromatic (PAN) image to generate a high-resolution hyperspectral image (HR-HSI). Despite advances in deep-learning-based hyperspectral (HS) pansharpening, the rich spectral detail and large data volume of HS images place high demands on a model's ability to extract and process spectral information. In this paper, we present HyperGAN, a hyperspectral image fusion approach based on Generative Adversarial Networks. Unlike previous methods that deepen the network to capture spectral information, HyperGAN widens the architecture with a Wide Block for multi-scale learning, effectively capturing both global and local details from the upsampled HSI and PAN images. While the LR-HSI provides rich spectral information, the PAN image supplies fine spatial detail; we introduce the Efficient Spatial and Channel Attention (ESCA) module to integrate these complementary features, and we add an energy-based discriminator that learns directly from the Ground Truth (GT) to further improve the quality of the fused image. We validated our method on scenes from the Pavia Center, Eastern Tianshan, and Chikusei datasets. The results show that HyperGAN outperforms state-of-the-art methods in both visual and quantitative evaluations.
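Since the abstract only names the building blocks, the following is a minimal, hypothetical PyTorch sketch of what a sequential channel-and-spatial attention block in the spirit of the ESCA module could look like; the class name, kernel sizes, and pooling choices are illustrative assumptions, not the authors' implementation.

```python
# Minimal, hypothetical sketch of a sequential channel + spatial attention
# block in the spirit of the ESCA module described above; not the authors' code.
import torch
import torch.nn as nn

class ESCASketch(nn.Module):
    def __init__(self, k_size: int = 3):
        super().__init__()
        # Channel attention: global average pool -> 1-D conv across channels -> sigmoid
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.channel_conv = nn.Conv1d(1, 1, kernel_size=k_size,
                                      padding=k_size // 2, bias=False)
        # Spatial attention: concatenated avg/max maps -> 7x7 conv -> sigmoid
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel attention weights, shape (B, C, 1, 1)
        w = self.avg_pool(x)
        w = self.channel_conv(w.squeeze(-1).transpose(1, 2))
        w = self.sigmoid(w.transpose(1, 2).unsqueeze(-1))
        x = x * w
        # Spatial attention map, shape (B, 1, H, W)
        avg_map = torch.mean(x, dim=1, keepdim=True)
        max_map, _ = torch.max(x, dim=1, keepdim=True)
        s = self.sigmoid(self.spatial_conv(torch.cat([avg_map, max_map], dim=1)))
        return x * s

# Example: a fused HSI/PAN feature map with 102 spectral bands (as in Pavia Center)
feats = torch.randn(1, 102, 64, 64)
print(ESCASketch()(feats).shape)  # torch.Size([1, 102, 64, 64])
```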