Abstract

Hyperspectral images (HSIs) encapsulate a vast amount of information owing to their large spatial extent and high number of spectral channels. However, this information is often underutilized because of ineffective feature extraction, particularly in regions with few samples and prominent edges. To fully leverage the spatial–spectral features of HSIs, a dual-branch multi-scale spatial–spectral residual attention network (MSRAN) that integrates multi-scale feature extraction with residual attention mechanisms is proposed. MSRAN extracts spatial and spectral features independently through two branches, minimizing interference between them and sharpening the focus of feature extraction in each dimension. Specifically, in the spectral branch, 3D convolution kernels of diverse scales capture long-range spectral sequence characteristics and neighborhood spectral features. A convolution fusion step emphasizes the weight of the central pixel to be classified, and a spectral residual attention mechanism then extracts enhanced central-pixel spectral features. In the spatial branch, multi-level receptive fields extract spatial contours, edges, and local details at various granularities, which are further processed by spatial residual attention to produce composite spatial features. Finally, a convolution fusion module adaptively integrates the center-enhanced spectral features with the multi-level fine-grained spatial features for classification. Extensive comparative experiments and ablation studies demonstrate that MSRAN achieves highly competitive results on two classic datasets, Pavia University and Salinas, as well as on the newer WHU-Hi-LongKou dataset.
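
The abstract describes the dual-branch structure only at a high level. The sketch below illustrates one plausible reading of that structure in PyTorch: a spectral branch using 3D convolutions of different spectral extents followed by a residual channel attention, a spatial branch using 2D convolutions with different receptive fields followed by a residual spatial attention, and a 1x1 convolution fusion ahead of the classifier. All layer widths, kernel sizes, and the exact attention formulations here are assumptions made for illustration; they are not the authors' published MSRAN configuration.

```python
import torch
import torch.nn as nn


class SpectralResidualAttention(nn.Module):
    """Channel (spectral) attention with a residual connection (assumed form)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.fc = nn.Sequential(
            nn.Conv3d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Re-weight spectral feature channels, then add the input back.
        return x + x * self.fc(self.pool(x))


class SpatialResidualAttention(nn.Module):
    """Spatial attention over a 2D feature map with a residual connection (assumed form)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x + x * attn


class MSRANSketch(nn.Module):
    """Illustrative dual-branch spatial-spectral network; not the published MSRAN."""
    def __init__(self, bands, num_classes):
        super().__init__()
        # Spectral branch: 3D kernels with different spectral extents.
        self.spec_a = nn.Conv3d(1, 8, kernel_size=(7, 1, 1), padding=(3, 0, 0))
        self.spec_b = nn.Conv3d(1, 8, kernel_size=(3, 3, 3), padding=(1, 1, 1))
        self.spec_fuse = nn.Conv3d(16, 16, kernel_size=1)
        self.spec_attn = SpectralResidualAttention(16)
        # Spatial branch: 2D kernels with different receptive fields.
        self.spat_a = nn.Conv2d(bands, 16, kernel_size=3, padding=1)
        self.spat_b = nn.Conv2d(bands, 16, kernel_size=5, padding=2)
        self.spat_attn = SpatialResidualAttention()
        # Convolutional fusion of the two branches, then a simple classifier head.
        self.fuse = nn.Conv2d(16 * bands + 32, 64, kernel_size=1)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(64, num_classes))

    def forward(self, x):            # x: (N, bands, H, W) HSI patches
        spec = x.unsqueeze(1)        # (N, 1, bands, H, W) for 3D convolution
        spec = torch.cat([self.spec_a(spec), self.spec_b(spec)], dim=1)
        spec = self.spec_attn(self.spec_fuse(spec))
        spec = spec.flatten(1, 2)    # fold (feature, band) axes into one 2D map
        spat = torch.cat([self.spat_a(x), self.spat_b(x)], dim=1)
        spat = self.spat_attn(spat)
        return self.head(self.fuse(torch.cat([spec, spat], dim=1)))


# Example: 103-band patches (as in Pavia University) over 9 classes.
logits = MSRANSketch(bands=103, num_classes=9)(torch.randn(2, 103, 9, 9))
print(logits.shape)  # torch.Size([2, 9])
```

In this reading, keeping the two branches separate until the final 1x1 fusion is what prevents spectral and spatial cues from interfering with each other, matching the motivation stated in the abstract.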
