Dual-stage adversarial domain adaptation for multi-modal hyperspectral image classification

Abstract

Hyperspectral image classification (HSIC) currently faces two major challenges: reduced knowledge-transfer efficiency caused by cross-domain distribution differences, and the limited generalization of representation learning under a single visual modality. Although prior work has attempted to address these challenges, deep cross-domain alignment and multimodal collaboration remain insufficient. This paper therefore proposes a dual-stage adversarial domain adaptation (DSADA) framework for HSIC that incorporates multi-modal learning. Specifically, a dual-stage adversarial learning scheme significantly alleviates the distribution shift between the source and target domains and enhances the model's cross-domain adaptability. In addition, a label-text modality is introduced: a cross-modal alignment mechanism optimizes the similarity between image and text prototypes, fully exploiting the complementary information between modalities and enhancing feature discriminability. Systematic experiments on three hyperspectral image datasets show that DSADA effectively alleviates cross-domain transfer challenges in few-shot scenarios and significantly improves HSIC accuracy.
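The adversarial domain adaptation idea at the core of DSADA rests on a standard building block: a domain discriminator is trained to tell source features from target features, while the feature side is updated with the reversed objective so the two distributions become indistinguishable. Below is a minimal NumPy sketch of that adversarial alignment loop; it is a toy illustration under assumed Gaussian features, not the paper's DSADA code, and all names in it are hypothetical.

```python
import numpy as np

# Toy adversarial alignment: a logistic domain discriminator scores features
# as source (1) or target (0); a learned additive shift on the target
# features is updated with the *reversed* objective so the discriminator
# can no longer separate the two domains.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

src = rng.normal(0.0, 1.0, size=(200, 4))   # source-domain features
tgt = rng.normal(1.5, 1.0, size=(200, 4))   # shifted target-domain features

w = np.zeros(4)        # discriminator weights
shift = np.zeros(4)    # additive correction applied to target features
lr = 0.1

for _ in range(500):
    i = rng.integers(0, 200)
    # Stage A: train the discriminator (logistic-regression gradient steps).
    p_s = sigmoid(src[i] @ w)
    w -= lr * (p_s - 1.0) * src[i]
    p_t = sigmoid((tgt[i] + shift) @ w)
    w -= lr * (p_t - 0.0) * (tgt[i] + shift)
    # Stage B: reversed gradient on the features, pushing target samples
    # toward the region the discriminator labels as "source".
    p_t = sigmoid((tgt[i] + shift) @ w)
    shift -= lr * (p_t - 1.0) * w

gap_before = np.linalg.norm(src.mean(0) - tgt.mean(0))
gap_after = np.linalg.norm(src.mean(0) - (tgt + shift).mean(0))
```

After training, the mean gap between the source features and the shifted target features shrinks, which is the qualitative effect adversarial alignment aims for; real methods replace the additive shift with a learned feature extractor.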

Similar Papers
  • Research Article
  • 10.1080/01431161.2025.2538833
Cross-domain few-shot hyperspectral image classification based on circle loss and dense graph convolution
  • Aug 28, 2025
  • International Journal of Remote Sensing
  • Jinkang Gui + 4 more

Deep learning methods have demonstrated exceptional performance in hyperspectral image classification in recent years. However, manual annotation is costly in practical applications of hyperspectral images, leading to a scarcity of labelled samples. At the same time, hyperspectral image classification faces challenges such as spectral variation within the same object and identical spectra across different objects. Current methods struggle to extract spectral and spatial contextual features, and under few-shot conditions the extracted features do not provide adequate discriminative power for intra-class and inter-class samples. Moreover, traditional few-shot classification methods require the source- and target-domain data to have similar distributions in the feature space; otherwise, knowledge learned from the source domain cannot transfer effectively to the target domain. To address these shortcomings, this paper enhances the feature extraction network's capability to capture contextual features, improves the discrimination between intra-class and inter-class samples through contrastive learning, and utilizes graph-structured information to alleviate domain shift. We propose a novel few-shot hyperspectral image classification framework (CDFSL-DGCCL) in which a circle contrastive loss function is introduced to reduce the similarity between features of different classes in the target domain and increase the similarity within the same class, thereby optimizing the feature extraction network. We further propose a learning strategy combining dense graph convolutional networks with the circle contrastive loss: two densely connected graph convolutional networks extract graph-structured information from the source and target domains, and the circle contrastive loss function optimizes feature similarity over that graph-structured information.
A new Global Context Module (GC-RCM) is embedded into the feature extraction network (SSCRNet) to capture global spectral and spatial contextual information. Experiments on four publicly available hyperspectral image datasets demonstrate that our method outperforms existing approaches. The code is available at https://github.com/GJINGKANG/CDFSL-DGCCL.
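The circle (contrastive) loss referenced above has a standard closed form (Sun et al.'s circle loss): each within-class similarity s_p and between-class similarity s_n is re-weighted by how far it sits from its optimum, so under-optimized pairs dominate the gradient. A small NumPy sketch of that standard formulation follows; it is not code from the paper, and the margin/scale values are the commonly used defaults, assumed here for illustration.

```python
import numpy as np

def circle_loss(sp, sn, m=0.25, gamma=64.0):
    """Circle loss over positive similarities sp and negative similarities sn.

    alpha_p and alpha_n re-weight each similarity by its distance from the
    optimum (1 + m for positives, -m for negatives), so pairs that are far
    from optimal receive larger gradients.
    """
    sp, sn = np.asarray(sp, float), np.asarray(sn, float)
    ap = np.clip(1.0 + m - sp, 0.0, None)   # positive-pair weights
    an = np.clip(sn + m, 0.0, None)         # negative-pair weights
    delta_p, delta_n = 1.0 - m, m           # decision margins
    logit_p = -gamma * ap * (sp - delta_p)
    logit_n = gamma * an * (sn - delta_n)
    # log(1 + sum_j exp(logit_n_j) * sum_i exp(logit_p_i))
    return np.log1p(np.exp(logit_n).sum() * np.exp(logit_p).sum())
```

Well-separated pairs (positive similarity near 1, negative near 0) yield a loss near zero, while poorly separated pairs are penalized heavily, which is what drives the intra-class/inter-class discrimination described above.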

  • Conference Article
  • Cited by 5
  • 10.1109/igarss46834.2022.9883053
Source-Free Domain Adaptation for Cross-Scene Hyperspectral Image Classification
  • Jul 17, 2022
  • Zun Xu + 3 more

Deep learning-based cross-domain hyperspectral image (HSI) classification methods have been proposed to train a classifier adapted to an unlabeled target domain with the help of abundant labeled data in the source domain. Although existing methods show their potential for cross-domain HSI classification, the source-domain data may not be available due to data privacy, which limits their applicability. In this case, how to utilize the model or knowledge trained on the source domain becomes a more challenging problem. In this study, we focus on this problem and propose a source-free unsupervised domain adaptation method for HSI classification. Specifically, we first design a source-domain HSI spectral feature generator, and then realize class-wise alignment between the generated source-domain spectral features and the target-domain HSI features through contrastive learning. To cope with the absence of labels in the target domain, we also utilize a logits-weighted prototype classifier to iteratively obtain labels for the target-domain data. Experiments on two cross-scene HSI datasets demonstrate the effectiveness of the proposed method when only the model trained on the source domain is provided.
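The logits-weighted prototype idea mentioned above can be sketched compactly: class prototypes are computed as confidence-weighted means of unlabeled target features, then labels are re-assigned by nearest prototype. The following is a hypothetical minimal version of that general recipe, not the authors' implementation; the function names are invented for illustration.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def weighted_prototypes(features, logits):
    """Class prototypes as confidence-weighted means of unlabeled features.

    features: (n, d) array; logits: (n, num_classes) classifier outputs.
    Each sample contributes to every prototype in proportion to its
    predicted class probability.
    """
    probs = softmax(logits)                       # (n, num_classes)
    protos = probs.T @ features                   # (num_classes, d)
    return protos / probs.sum(0, keepdims=True).T

def assign_labels(features, protos):
    """Pseudo-labels by nearest prototype in Euclidean distance."""
    d = np.linalg.norm(features[:, None, :] - protos[None, :, :], axis=-1)
    return d.argmin(1)
```

In a source-free setting, alternating `weighted_prototypes` and `assign_labels` refines the pseudo-labels iteratively, which is the role the logits-weighted prototype classifier plays in the abstract above.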

  • Research Article
  • Cited by 137
  • 10.1016/j.patcog.2017.10.007
Active multi-kernel domain adaptation for hyperspectral image classification
  • Oct 13, 2017
  • Pattern Recognition
  • Cheng Deng + 3 more


  • Research Article
  • Cited by 34
  • 10.1109/jstars.2023.3234302
Convolutional Transformer-Based Few-Shot Learning for Cross-Domain Hyperspectral Image Classification
  • Jan 1, 2023
  • IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
  • Yishu Peng + 3 more

In cross-domain hyperspectral image (HSI) classification, labeled samples in the target domain are very limited, so it is worthwhile to exploit sufficient class information from the source domain to categorize the target-domain classes (both the same and new unseen classes). This article investigates this problem by employing few-shot learning (FSL) in a meta-learning paradigm. However, most existing cross-domain FSL methods extract statistical features based on convolutional neural networks (CNNs), which typically consider only the local spatial information among features while ignoring global information. To make up for these shortcomings, this article proposes a novel convolutional transformer-based few-shot learning (CTFSL) method. Specifically, FSL is first performed on the classes of the source and target domains simultaneously to build a consistent scenario. Then, a domain aligner is set up to map the source and target domains to the same dimensions. In addition, a convolutional transformer (CT) network is utilized to extract local-global features. Finally, a domain discriminator is applied that can not only reduce domain shift but also distinguish which domain a feature originates from. Experiments on three widely used hyperspectral image datasets indicate that the proposed CTFSL method is superior to state-of-the-art cross-domain FSL methods and several typical HSI classification methods in terms of classification accuracy.

  • Research Article
  • Cited by 83
  • 10.1016/j.eswa.2023.119508
Multireceptive field: An adaptive path aggregation graph neural framework for hyperspectral image classification
  • Jan 7, 2023
  • Expert Systems with Applications
  • Zhili Zhang + 6 more


  • Research Article
  • Cited by 34
  • 10.1109/tgrs.2020.3045790
Physically Constrained Transfer Learning Through Shared Abundance Space for Hyperspectral Image Classification
  • Aug 21, 2020
  • IEEE Transactions on Geoscience and Remote Sensing
  • Ying Qu + 5 more

Hyperspectral image (HSI) classification is one of the most active research topics and has achieved promising results boosted by the recent development of deep learning. However, most state-of-the-art approaches tend to perform poorly when the training and testing images are on different domains, e.g., the source domain and target domain, respectively, due to the spectral variability caused by different acquisition conditions. Transfer learning-based methods address this problem by pretraining in the source domain and fine-tuning on the target domain. Nonetheless, a considerable amount of data on the target domain has to be labeled and nonnegligible computational resources are required to retrain the whole network. In this article, we propose a new transfer learning scheme to bridge the gap between the source and target domains by projecting the HSI data from the source and target domains into a shared abundance space based on their own physical characteristics. In this way, the domain discrepancy would be largely reduced such that the model trained on the source domain could be applied to the target domain without extra efforts for data labeling or network retraining. The proposed method is referred to as physically constrained transfer learning through shared abundance space (PCTL-SAS). Extensive experimental results demonstrate the superiority of the proposed method as compared to the state of the art. The success of this endeavor would largely facilitate the deployment of HSI classification for real-world sensing scenarios.

  • Research Article
  • Cited by 37
  • 10.1016/j.knosys.2020.106319
Hyperspectral image classification based on discriminative locality preserving broad learning system
  • Jul 29, 2020
  • Knowledge-Based Systems
  • Yonghe Chu + 6 more


  • Research Article
  • Cited by 11
  • 10.1109/tgrs.2022.3203980
A Cross-Level Spectral–Spatial Joint Encode Learning Framework for Imbalanced Hyperspectral Image Classification
  • Jan 1, 2022
  • IEEE Transactions on Geoscience and Remote Sensing
  • Dabing Yu + 4 more

Convolutional neural networks (CNNs) have dominated research on hyperspectral image (HSI) classification, owing to their superior feature representation capacity. Patch-free global learning (FPGA), a fast learning framework for HSI classification, has received wide interest. Despite promising results in terms of fast inference, recent works have difficulty modeling spectral-spatial relationships with imbalanced samples. In this paper, we revisit the encoder-decoder-based fully convolutional network (FCN) and propose a cross-level spectral-spatial joint encoding framework (CLSJE) for imbalanced HSI classification. First, a multi-scale input encoder and multiple-to-one multi-scale feature connections are introduced to obtain abundant features and facilitate multi-scale contextual information flow between encoder and decoder. Second, in the encoder layer, we propose a spectral-spatial joint attention (SSJA) mechanism consisting of high-frequency spatial attention (HFSA) and spectral-transform channel attention (STCA). HFSA and STCA encode spectral-spatial features jointly to improve the learning of discriminative spectral-spatial features. Powered by these two components, CLSJE has a high capability to capture both spatial and spectral dependencies for HSI classification. In addition, a class-proportion sampling strategy is developed to increase attention to under-represented classes. Extensive experiments demonstrate the superiority of the proposed CLSJE in both classification accuracy and inference speed, with state-of-the-art results on four benchmark datasets. Code can be obtained at: https://github.com/yudadabing/CLSJE.

  • Research Article
  • Cited by 35
  • 10.3390/rs8120985
Robust Hyperspectral Image Classification by Multi-Layer Spatial-Spectral Sparse Representations
  • Nov 30, 2016
  • Remote Sensing
  • Xiaoyong Bian + 3 more

Sparse representation (SR)-driven classifiers have been widely adopted for hyperspectral image (HSI) classification, and many algorithms have been presented recently. However, most existing methods exploit a single-layer hard assignment based on class-wise reconstruction errors under the subspace assumption; moreover, the single-layer SR is biased and less stable due to the high coherence of the training samples. In this paper, motivated by category sparsity, a novel multi-layer spatial-spectral sparse representation (mlSR) framework for HSI classification is proposed. The mlSR assignment framework effectively classifies the test samples based on adaptive dictionary assembling in a multi-layer manner and the intrinsic class-dependent distribution. In the proposed framework, three algorithms are developed for HSI: multi-layer SR classification (mlSRC), multi-layer collaborative representation classification (mlCRC), and multi-layer elastic net representation-based classification (mlENRC). All three algorithms can achieve a better SR for the test samples, which benefits HSI classification. Experiments are conducted on three real HSI datasets. Compared with several state-of-the-art approaches, the increases in overall accuracy (OA), kappa, and average accuracy (AA) on the Indian Pines image range from 3.02% to 17.13%, 0.034 to 0.178, and 1.51% to 11.56%, respectively. The improvements in OA, kappa, and AA for the University of Pavia are from 1.4% to 21.93%, 0.016 to 0.251, and 0.12% to 22.49%, respectively. Furthermore, the OA, kappa, and AA for the Salinas image can be improved from 2.35% to 6.91%, 0.026 to 0.074, and 0.88% to 5.19%, respectively. This demonstrates that the proposed mlSR framework can achieve comparable or better performance than the state-of-the-art classification methods.
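The "class-wise reconstruction error" rule that single-layer SR classifiers use can be illustrated compactly: each class has a sub-dictionary, a test pixel is coded against each, and the class with the smallest reconstruction residual wins. The sketch below uses an unregularized least-squares fit as a stand-in for the sparse coding step (the actual methods solve an l1-regularized problem), so it is illustrative only.

```python
import numpy as np

def src_classify(test, dictionaries):
    """Assign the class whose sub-dictionary reconstructs the sample best.

    test: (d,) spectral vector; dictionaries: list of (d, n_atoms) arrays,
    one sub-dictionary per class. Least squares stands in for sparse coding.
    """
    residuals = []
    for D in dictionaries:
        coef, *_ = np.linalg.lstsq(D, test, rcond=None)   # code the sample
        residuals.append(np.linalg.norm(test - D @ coef)) # reconstruction error
    return int(np.argmin(residuals))
```

The mlSR framework above layers this assignment, re-assembling the dictionary at each layer, rather than committing to one hard single-layer decision.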

  • Research Article
  • Cited by 12
  • 10.1049/cje.2020.05.003
Hyperspectral Image Classification Based on Unsupervised Heterogeneous Domain Adaptation CycleGan
  • Jul 1, 2020
  • Chinese Journal of Electronics
  • Xuesong Wang + 2 more

Aiming at the difficulty of obtaining sufficient labeled hyperspectral image (HSI) data and the inconsistent feature distributions of different HSIs, a novel unsupervised heterogeneous domain adaptation CycleGAN (UHDAC) is proposed, using CycleGAN to capture transferable features in the absence of similar data. On the one hand, a two-way mapping is used to find the internal relationship between the source- and target-domain data, while a two-way adversary constrains the source- and target-domain features, realizing the alignment of feature distributions. On the other hand, the CORAL loss function is introduced to minimize the second-order statistical difference between the source- and target-domain features, addressing the insufficient constraint on the mapping relationship caused by the low structural consistency of HSI data across domains. Experiments on three real HSI datasets show that UHDAC can effectively realize unsupervised classification of target-domain HSIs with high accuracy by using labeled HSI data from the source domain.
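The CORAL loss referenced above has a simple standard definition (from Sun and Saenko's Deep CORAL work): the squared Frobenius distance between the source and target feature covariance matrices, scaled by 1/(4d^2) for feature dimension d. A minimal NumPy version of that standard formula, not the paper's code:

```python
import numpy as np

def coral_loss(xs, xt):
    """CORAL loss: squared Frobenius distance between the source and target
    feature covariance matrices, scaled by 1/(4 d^2).

    xs, xt: (n_s, d) and (n_t, d) feature matrices.
    """
    d = xs.shape[1]
    cs = np.cov(xs, rowvar=False)   # source second-order statistics
    ct = np.cov(xt, rowvar=False)   # target second-order statistics
    return np.linalg.norm(cs - ct, "fro") ** 2 / (4.0 * d * d)
```

Minimizing this term aligns the second-order statistics of the two domains, which is exactly the "insufficient constraint" role the abstract assigns to CORAL alongside the adversarial mapping.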

  • Research Article
  • Cited by 23
  • 10.1109/jstars.2021.3109012
Attention Multisource Fusion-Based Deep Few-Shot Learning for Hyperspectral Image Classification
  • Jan 1, 2021
  • IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
  • Xuejian Liang + 2 more

Recently, deep learning-based methods have outperformed others in hyperspectral image (HSI) classification. However, deep learning methods require sufficient labeled samples to perform well, which is often infeasible in practice. Training labels are usually limited in the HSIs that need to be classified (the target domain), while available labels in multisource HSIs (the source domain) are not utilized effectively. To mitigate these issues, an attention multisource fusion method for few-shot learning (AMF-FSL) is proposed for small-sample HSI classification. AMF-FSL is an implementation of few-shot learning (FSL) in the meta-learning field, which can transfer the learned classification ability from multiple source datasets to target data. Learning to classify in AMF-FSL is not restricted by the traditional requirement of identical distributions between the source and target domains: the model can learn from the source domain and apply that knowledge to a differently distributed target domain. Moreover, the multisource domain adaptation in AMF-FSL can extract features from fused homogeneous and heterogeneous data in the source domain, which improves the generalization of the classification model across domains. Specifically, the multisource domain adaptation contains three modules, namely target-based class alignment, domain attention assignment, and multisource data fusion, which are responsible for aligning the class space, paying band-level attention, and merging the distributions of homogeneous and heterogeneous source data, respectively. The experimental results demonstrate the effectiveness of the multisource domain adaptation and the superiority of AMF-FSL over other state-of-the-art methods in small-sample HSI classification.

  • Research Article
  • Cited by 8
  • 10.3390/rs15071803
Local and Global Spectral Features for Hyperspectral Image Classification
  • Mar 28, 2023
  • Remote Sensing
  • Zeyu Xu + 3 more

Hyperspectral images (HSIs) offer powerful spectral characterization capabilities and are widely used, especially for classification. However, the rich spectrum contained in an HSI also increases the difficulty of extracting useful information, which makes the feature extraction method significant, as it enables effective expression and utilization of the spectrum. Traditional HSI feature extraction methods design spectral features manually, which is likely to be limited by the complex spectral information within HSIs. Recently, data-driven methods, especially convolutional neural networks (CNNs), have shown great improvements when processing image data owing to their powerful automatic feature learning and extraction abilities, and they are also widely used for HSI feature extraction and classification. The CNN extracts features based on the convolution operation. Nevertheless, the local perception of the convolution operation makes CNNs focus on local spectral features (LSF) and weakens the description of features between long-distance spectral ranges, referred to as global spectral features (GSF) in this study. LSF and GSF describe the spectral features from two different perspectives, and both are essential for characterizing the spectrum. Thus, in this study, a local-global spectral feature (LGSF) extraction and optimization method is proposed to jointly consider the LSF and GSF for HSI classification. To strengthen the relationships between spectra and allow features to take more forms, we first transform the 1D spectral vector into a 2D spectral image. Based on the spectral image, a local spectral feature extraction module (LSFEM) and a global spectral feature extraction module (GSFEM) are proposed to automatically extract the LGSF. A loss function for spectral feature optimization, inspired by contrastive learning, is proposed to optimize the LGSF and obtain improved class separability.
We further enhanced the LGSF by introducing spatial relation and designed a CNN constructed using dilated convolution for classification. The proposed method was evaluated on four widely used HSI datasets, and the results highlighted its comprehensive utilization of spectral information as well as its effectiveness in HSI classification.

  • Research Article
  • Cited by 193
  • 10.1109/tgrs.2019.2902568
Hyperspectral Classification Based on Lightweight 3-D-CNN With Transfer Learning
  • Apr 18, 2019
  • IEEE Transactions on Geoscience and Remote Sensing
  • Haokui Zhang + 5 more

Recently, hyperspectral image (HSI) classification approaches based on deep learning (DL) models have been proposed and shown promising performance. However, because of very limited available training samples and massive model parameters, DL methods may suffer from overfitting. In this paper, we propose an end-to-end 3-D lightweight convolutional neural network (CNN), abbreviated as 3-D-LWNet, for HSI classification with limited samples. Compared with conventional 3-D-CNN models, the proposed 3-D-LWNet has a deeper network structure, fewer parameters, and lower computation cost, resulting in better classification performance. To further alleviate the small-sample problem, we also propose two transfer learning strategies: 1) a cross-sensor strategy, in which we pretrain a 3-D model on source HSI datasets containing a greater number of labeled samples and then transfer it to the target HSI datasets; and 2) a cross-modal strategy, in which we pretrain a 3-D model on 2-D RGB image datasets containing a large number of samples and then transfer it to the target HSI datasets. In contrast to previous approaches, we do not impose restrictions on the source datasets: they do not have to be collected by the same sensors as the target datasets. Experiments on three public HSI datasets captured by different sensors demonstrate that our model achieves competitive performance for HSI classification compared to several state-of-the-art methods.

  • Research Article
  • Cited by 22
  • 10.1016/j.neucom.2020.05.082
Discriminant sub-dictionary learning with adaptive multiscale superpixel representation for hyperspectral image classification
  • Jun 1, 2020
  • Neurocomputing
  • Xiao Tu + 5 more


  • Research Article
  • Cited by 13
  • 10.1109/lgrs.2018.2889800
Robust Hyperspectral Image Domain Adaptation With Noisy Labels
  • Jul 1, 2019
  • IEEE Geoscience and Remote Sensing Letters
  • Wei Wei + 5 more

In hyperspectral image (HSI) classification, domain adaptation (DA) methods have proved effective in addressing unsatisfactory classification results caused by the distribution difference between training (source-domain) and testing (target-domain) pixels. However, these methods rely on accurate labels in the source domain and seldom consider the performance drop caused by noisy labels, which often occur since labeling pixels in HSIs is a challenging task. To improve the robustness of DA methods to label noise, we propose a new unsupervised HSI DA method constructed at both the feature level and the classifier level. First, a linear transformation function is learned at the feature level to align the source(-domain) subspace with the target(-domain) subspace. Then, a robust low-rank-representation-based classifier is developed to cope well with the features obtained from the aligned subspace. Since both the subspace alignment and the classifier are immune to noisy labels, the proposed method obtains good classification results when confronted with noisy labels in the source domain. Experimental results on two DA benchmarks demonstrate the effectiveness of the proposed method.
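The feature-level step described here, aligning the source subspace with the target subspace via a learned linear transformation, follows the classic subspace-alignment recipe: take the top-k PCA bases P_s and P_t of each domain and map source data through P_s (P_s^T P_t). A compact NumPy sketch of that standard formulation (not the authors' exact code; k is an assumed hyperparameter):

```python
import numpy as np

def pca_basis(x, k):
    """Top-k principal directions of x as a (d, k) column-orthonormal matrix."""
    xc = x - x.mean(0)
    _, _, vt = np.linalg.svd(xc, full_matrices=False)
    return vt[:k].T

def subspace_align(xs, xt, k=2):
    """Subspace alignment: project source data through Ps @ (Ps^T Pt) so its
    subspace coordinates line up with the target's Pt coordinates."""
    ps, pt = pca_basis(xs, k), pca_basis(xt, k)
    m = ps.T @ pt                           # (k, k) alignment transformation
    zs = (xs - xs.mean(0)) @ ps @ m         # aligned source coordinates
    zt = (xt - xt.mean(0)) @ pt             # target coordinates
    return zs, zt
```

Because the alignment uses only the unlabeled feature geometry of each domain, it is unaffected by label noise, which is why the method above can pair it with a noise-robust classifier.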
