The joint classification of hyperspectral imagery (HSI) and LiDAR data is an important task in remote sensing image interpretation. Traditional classifiers, such as the support vector machine (SVM) and random forest (RF), struggle to capture the complex correlations among spectral, spatial, and elevation information. Recently, notable progress has been made in HSI–LiDAR classification using convolutional neural networks (CNNs) and Transformers. However, given the large spatial extent of remote sensing images, vanilla Transformers and CNNs have difficulty capturing global context effectively. Moreover, the slight misalignment between multi-source data poses challenges for their effective fusion. In this paper, we introduce AFA–Mamba, an Adaptive Feature Alignment Network with a Global–Local Mamba design that achieves accurate land cover classification. It contains two core designs: (1) a Global–Local Mamba encoder, which effectively models global context through a 2D selective scanning mechanism while introducing a local bias to enhance the spatial features of local objects; and (2) an SSE Adaptive Alignment and Fusion (A2F) module, which adaptively adjusts the relative positions of multi-source features by establishing a guided subspace to accurately estimate feature-level offsets, enabling more effective fusion. As a result, AFA–Mamba consistently outperforms state-of-the-art multi-source fusion classification approaches across multiple datasets.
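To make the alignment-and-fusion idea concrete, below is a minimal, hypothetical PyTorch sketch of an A2F-style step. It is not the authors' implementation; the class and layer names (AdaptiveAlignFuse, guide, offset) are illustrative assumptions. The sketch projects concatenated HSI and LiDAR features into a shared ("guided") subspace, predicts a per-pixel 2D offset field, warps the LiDAR features onto the HSI grid with grid_sample, and fuses by addition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveAlignFuse(nn.Module):
    """Illustrative A2F-style alignment step (hypothetical, not the paper's code):
    estimate feature-level offsets in a shared subspace, warp one modality's
    features onto the other's grid, then fuse."""

    def __init__(self, channels: int, hidden: int = 32):
        super().__init__()
        # Shared ("guided") subspace projection of both modalities
        self.guide = nn.Conv2d(2 * channels, hidden, kernel_size=3, padding=1)
        # Offset head: two channels per pixel, (dx, dy) in normalized coordinates
        self.offset = nn.Conv2d(hidden, 2, kernel_size=3, padding=1)

    def forward(self, hsi_feat: torch.Tensor, lidar_feat: torch.Tensor) -> torch.Tensor:
        b, _, h, w = hsi_feat.shape
        # Estimate per-pixel offsets from the joint (guided) representation
        joint = F.relu(self.guide(torch.cat([hsi_feat, lidar_feat], dim=1)))
        offsets = self.offset(joint)                      # (b, 2, h, w)
        # Base sampling grid over [-1, 1] x [-1, 1]
        ys, xs = torch.meshgrid(
            torch.linspace(-1.0, 1.0, h, device=hsi_feat.device),
            torch.linspace(-1.0, 1.0, w, device=hsi_feat.device),
            indexing="ij",
        )
        base = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
        grid = base + offsets.permute(0, 2, 3, 1)         # (b, h, w, 2)
        # Warp LiDAR features onto the HSI grid and fuse additively
        aligned = F.grid_sample(lidar_feat, grid, align_corners=True)
        return hsi_feat + aligned


# Usage: align and fuse two 64-channel feature maps of size 16x16
fuse = AdaptiveAlignFuse(channels=64)
out = fuse(torch.randn(2, 64, 16, 16), torch.randn(2, 64, 16, 16))
print(out.shape)  # torch.Size([2, 64, 16, 16])
```

The additive fusion at the end is one simple choice; a concatenation followed by a 1x1 convolution would serve the same illustrative purpose.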