Articles published on Hyperspectral Image Classification
4016 Search results
- New
- Research Article
- 10.1109/tnnls.2025.3608294
- Jan 1, 2026
- IEEE transactions on neural networks and learning systems
- Jiaojiao Li + 5 more
Recently, domain alignment and metric-based few-shot learning (FSL) have been introduced into hyperspectral image classification (HSIC) to solve the issues of uneven data distribution and scarcity of annotated data faced in practical applications. However, existing cross-domain few-shot methods ignore pivotal frequency priors of the complex field, which contribute to better category discrimination and knowledge transfer. To address this issue, we propose a novel physics-guided time-interactive-frequency network (PTFNet) for cross-domain few-shot HSIC, enabling, as a pioneering effort, the simultaneous extraction of both frequency priors and spatial features (termed "time domain" following Fourier convention) through a lightweight time-interactive-frequency module (TiF-Module). Meanwhile, a spectral Fourier-based augmentation module (SFA-Module) is designed to decouple the frequency priors and enhance the distributional diversity of physical attributes to imitate the domain shift. Then, a physics consistency loss is introduced to regularize the diverse embeddings to approximate the center of each category's physical attributes, guiding the network to excavate more transferable knowledge from the source domain (SD). Furthermore, to fully exploit the discriminative time-frequency information and further improve the accuracy of boundary pixels, a set of multiorientation homogeneous prototypes is adopted to represent each class comprehensively, and an intuitive and flexible uncertainty-rectified bidirectional random walk strategy is applied in place of the Euclidean metric for more reliable classification. The experimental results on four public datasets demonstrate the prominent performance of the proposed PTFNet.
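The SFA-Module above augments spectra in the Fourier domain to imitate domain shift. As a rough numpy sketch of that general idea (not the authors' implementation; the function name, jitter scheme, and parameters are all assumptions), one can perturb the Fourier amplitudes of a spectral signature while preserving its phase:

```python
import numpy as np

def fourier_spectral_augment(pixel_spectrum, strength=0.1, rng=None):
    """Perturb the Fourier amplitudes of a 1-D spectral signature while
    keeping its phase, a common way to imitate cross-domain shift.
    Illustrative only; names and scheme are hypothetical."""
    rng = np.random.default_rng(rng)
    spec = np.fft.rfft(pixel_spectrum)            # frequency-domain view
    amplitude, phase = np.abs(spec), np.angle(spec)
    # Randomly rescale each amplitude by up to +/- `strength`.
    jitter = 1.0 + strength * rng.uniform(-1.0, 1.0, size=amplitude.shape)
    augmented = np.fft.irfft(amplitude * jitter * np.exp(1j * phase),
                             n=len(pixel_spectrum))
    return augmented

spectrum = np.sin(np.linspace(0, 3 * np.pi, 200))  # toy 200-band signature
aug = fourier_spectral_augment(spectrum, strength=0.05, rng=0)
```

Because only amplitudes are jittered, the augmented signature stays highly correlated with the original while its low-level "physical" statistics vary, which is the kind of diversity the abstract attributes to the SFA-Module.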
- New
- Research Article
- 10.1016/j.rsase.2025.101823
- Jan 1, 2026
- Remote Sensing Applications: Society and Environment
- Mohammed Q Alkhatib + 1 more
MixerCA: An efficient and accurate model for high-performance hyperspectral image classification
- New
- Research Article
- 10.1109/tgrs.2025.3650003
- Jan 1, 2026
- IEEE Transactions on Geoscience and Remote Sensing
- Kai Deng + 6 more
Cross-Scene Open-Set Hyperspectral Image Classification via Joint Distribution Matching and Unknown Class Uncertainty Suppression
- New
- Research Article
- 10.1016/j.asoc.2025.114209
- Jan 1, 2026
- Applied Soft Computing
- Xiqun Song + 4 more
Dual-student co-training network using Mamba and unreliable sample learning with class-adaptation for hyperspectral image classification
- New
- Research Article
- 10.1016/j.eswa.2025.128842
- Jan 1, 2026
- Expert Systems with Applications
- Chi Wang + 4 more
Cross-scene hyperspectral image classification based on cross-domain feature extraction and category decision collaborative optimization
- New
- Research Article
- 10.1016/j.knosys.2025.114908
- Jan 1, 2026
- Knowledge-Based Systems
- Shuai Ma + 3 more
Spectral context-aware frequency alignment for few-shot hyperspectral image classification
- New
- Research Article
- 10.1016/j.infrared.2025.106228
- Jan 1, 2026
- Infrared Physics & Technology
- Yixin Yang + 5 more
Short- and long-range graph convolutional network with decoupled feature propagation for hyperspectral image classification
- New
- Research Article
- 10.1016/j.eswa.2025.129198
- Jan 1, 2026
- Expert Systems with Applications
- Zitong Zhang + 3 more
HorD²CN: High-order deformable differential convolution network for hyperspectral image classification
- New
- Research Article
- 10.1016/j.eswa.2025.129153
- Jan 1, 2026
- Expert Systems with Applications
- Hui Yan + 4 more
Different-hop node interactions graph attention network with cross-scale guided feature fusion for hyperspectral image classification
- New
- Research Article
- 10.1109/tgrs.2025.3649914
- Jan 1, 2026
- IEEE Transactions on Geoscience and Remote Sensing
- Gu Gong + 6 more
MSIA: A Multi-Scale Interactive Attention Network Assisted by Self-Supervised Contrastive Learning for Hyperspectral Image Classification
- New
- Research Article
- 10.1016/j.optlastec.2025.114182
- Jan 1, 2026
- Optics & Laser Technology
- Pengfei Zhu + 1 more
Channel-wise transformer with spectral-spatial gated self-attention for hyperspectral image classification
- New
- Research Article
- 10.3390/s26010174
- Dec 26, 2025
- Sensors (Basel, Switzerland)
- Praveen Pankajakshan + 2 more
We present a novel framework for hyperspectral satellite image classification that explicitly balances spatial nearness with spectral similarity. The proposed method is trained on closed-set datasets, and it generalizes well to open-set agricultural scenarios that include class-distribution shifts, the presence of novel classes, and the absence of known ones. This scenario reflects real-world agricultural conditions, where geographic regions, crop types, and seasonal dynamics vary widely and labeled data are scarce and expensive. The input data are projected onto a lower-dimensional spectral manifold, and a pixel-wise classifier generates an initial class-probability saliency map. A kernel-based spectral-spatial weighting strategy then fuses the spatial and spectral features. The proposed approach improves classification accuracy by – over spectral-only models on benchmark datasets. Incorporating an additional unsupervised refinement step further improves accuracy, surpassing several recent state-of-the-art methods. Requiring only 1– labeled training data and at most two tuneable parameters, the framework operates with minimal computational overhead, qualifying it as a data-efficient and scalable few-shot learning solution. Although recent deep architectures exhibit high accuracy under data-rich conditions, they often show limited transferability under low-label, open-set agricultural conditions. We demonstrate transferability to new domains, including unseen crop classes (e.g., paddy), seasons, and regions (e.g., Piedmont, Italy), without re-training. Rice paddy fields play a pivotal role in global food security but are also a significant contributor to greenhouse gas emissions, especially methane, so accurate extent mapping is critical. This work presents a novel perspective on hyperspectral classification and open-set adaptation, suited for sustainable agriculture with limited labels and low-resource domain generalization.
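The abstract describes fusing a pixel-wise class-probability map with spatial nearness via a kernel-based weighting. A minimal numpy sketch of that general idea, where the Gaussian kernel, function names, and parameters are assumptions rather than the paper's actual method:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 2-D Gaussian kernel encoding spatial nearness."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def spatially_refine(prob_map, size=5, sigma=1.0):
    """Blend each pixel's class probabilities with its neighbours'
    via a Gaussian spatial kernel (illustrative stand-in for the
    paper's kernel-based spectral-spatial weighting)."""
    h, w, c = prob_map.shape
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(prob_map, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.empty_like(prob_map)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + size, j:j + size, :]
            out[i, j] = np.tensordot(k, patch, axes=([0, 1], [0, 1]))
    out /= out.sum(axis=-1, keepdims=True)  # keep probabilities normalized
    return out
```

An isolated, spectrally noisy pixel surrounded by consistent neighbours gets pulled toward the locally dominant class, which is the effect such spatial-spectral fusion aims for.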
- New
- Research Article
- 10.1117/1.jrs.19.046520
- Dec 24, 2025
- Journal of Applied Remote Sensing
- Tianxiang Zhang + 4 more
FSTNet: frequency spectral transformer for hyperspectral image classification
- New
- Research Article
- 10.1038/s41598-025-27835-8
- Dec 22, 2025
- Scientific Reports
- Xianjian Shi + 4 more
Hyperspectral image classification is a critical task in remote sensing, but existing methods often employ fixed feature fusion strategies, making it difficult to adapt to the data characteristics of different scenarios. Additionally, there is a lack of effective synergy between multi-scale feature extraction and attention mechanisms. To address this issue, this paper proposes a dynamic gated fusion network with hierarchical multi-scale attention (DGFNet). This method comprises three core modules: the multi-scale feature aggregator (MSFA), which uses a pyramid expansion convolution structure to concurrently extract spatial features with different receptive fields, achieving comprehensive scale coverage from local texture to global context; the enhanced channel-spatial attention (ECSA) module, which employs multi-pooling strategies and a cascaded structure to achieve deep interaction between channel and spatial attention, thereby adaptively enhancing discriminative features; and the dynamic gated fusion module, which learns input-related fusion weights to adaptively adjust the contribution ratios of multi-scale features and attention features based on data characteristics. Experimental results on four benchmark datasets (Pavia University, Houston, Indian Pines, and WHU-HongHu) show that DGFNet achieves overall accuracy rates of 96.91%, 97.12%, 94.05%, and 94.46%, respectively, representing significant improvements over existing state-of-the-art methods. Ablation experiments thoroughly validate the effectiveness and necessity of each module. Additionally, this paper systematically compares five different fusion strategies (cross-attention, hierarchical fusion, parallel fusion, recurrent fusion, and sequential fusion). Experimental results demonstrate that dynamic gated fusion outperforms other fusion methods in terms of classification accuracy, computational efficiency, and model stability. This method provides an efficient, accurate, and robust solution for hyperspectral image classification. The code will be published on https://github.com/willianbilledu-alt/DGFNet.
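The dynamic gated fusion module described above learns input-dependent weights to mix multi-scale and attention features. A minimal numpy sketch of that fusion rule (the gate design, names, and shapes are assumptions; in DGFNet the weights would be learned end-to-end):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dynamic_gated_fusion(f_scale, f_attn, w, b):
    """Compute an input-dependent per-channel gate g in (0, 1) from
    global descriptors of both inputs, then mix them as
    out = g * f_scale + (1 - g) * f_attn.
    Hypothetical sketch; w and b stand in for learned parameters."""
    pooled = np.concatenate([f_scale.mean(axis=(0, 1)),
                             f_attn.mean(axis=(0, 1))])  # global descriptors
    g = sigmoid(pooled @ w + b)                          # per-channel gate
    return g * f_scale + (1.0 - g) * f_attn
```

Because the gate is a convex weight per channel, the fused feature always lies between the two inputs elementwise, while the mixing ratio adapts to the statistics of each input, which is the "dynamic" property the abstract emphasizes.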
- New
- Research Article
- 10.1007/s13369-025-11001-3
- Dec 22, 2025
- Arabian Journal for Science and Engineering
- Guandong Li + 1 more
Dynamic 3D KAN Convolution with Adaptive Grid Optimization for Hyperspectral Image Classification
- New
- Research Article
- 10.1007/s12145-025-02035-0
- Dec 22, 2025
- Earth Science Informatics
- Jyoti Maggu + 2 more
Smart pixels: Interpretable active dictionary learning with spatial coherence regularization for hyperspectral image classification
- Research Article
- 10.1038/s41597-025-06404-8
- Dec 17, 2025
- Scientific data
- Ashish Mani + 10 more
In this paper we introduce a new large-scale hyperspectral satellite image dataset named OHID-FF, specifically designed for forest fire detection and classification tasks. The OHID-FF dataset comprises 1,197 hyperspectral images from 22 different scenarios, with each image featuring 32 spectral bands and a spatial resolution of 10 meters per pixel. The dataset covers 22 locations in Australia, encompassing urban areas, mountainous regions, oceans, and other terrains. Compared to existing fire datasets, OHID-FF offers a richer volume of data and higher imaging quality, making it an ideal choice for training deep neural networks. Through benchmark experiments on this dataset, we found that existing methods face challenges in accurately classifying OHID-FF data, which makes it a challenging new benchmark for hyperspectral image classification. Additionally, we provide detailed descriptions of the dataset preparation process, data sources, tile creation, and annotation procedures. Furthermore, we present experimental results using different deep learning models for fire detection and image classification, demonstrating the potential of this dataset in practical applications.
- Research Article
- 10.3390/electronics14244935
- Dec 16, 2025
- Electronics
- Chengjie Guo + 3 more
Deep learning (DL), a hierarchical feature extraction method, has garnered increasing attention in the remote sensing community. Recently, self-supervised learning (SSL) methods in DL have gained wide recognition due to their ability to mitigate the dependence on both the quantity and quality of samples. This advantage is particularly significant when dealing with limited labeled samples in hyperspectral images (HSIs). However, conventional SSL methods face two main challenges. They struggle to construct self-supervised signals based on the unique characteristics of HSI. Moreover, they fail to design network optimization strategies that leverage the intrinsic manifold geometry within HSI. To tackle these issues, we propose a novel self-supervised learning method termed Manifold Geometry-Leveraged Self-supervised Learning (MSSL) for HSI classification. The approach employs a two-stage training strategy. In the initial pre-training stage, it develops self-supervised signals that exploit spatial homogeneity and spectral coherence properties of HSI. Furthermore, it introduces a manifold geometry-guided loss function that enhances feature discrimination by increasing intra-class compactness and inter-class separation. The second stage is a fine-tuning phase utilizing a small set of labeled samples. This stage optimizes the pre-trained model, enabling effective feature extraction from hyperspectral data for classification tasks. Experiments conducted on real-world HSI datasets demonstrate that MSSL achieves superior classification performance compared to several relevant state-of-the-art methods.
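The manifold geometry-guided loss above increases intra-class compactness and inter-class separation. A toy numpy version of such a compactness/separation objective (illustrative only; the function name, margin hinge, and weighting are assumptions, not the paper's exact loss):

```python
import numpy as np

def manifold_geometry_loss(features, labels, margin=1.0):
    """Pull samples toward their class mean (compactness) and push
    class means at least `margin` apart (separation).
    Hypothetical sketch of a compactness/separation loss."""
    classes = np.unique(labels)
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    # Intra-class term: mean squared distance to the class centroid.
    intra = np.mean([np.sum((f - means[l]) ** 2)
                     for f, l in zip(features, labels)])
    # Inter-class term: hinge penalty on centroid pairs closer than margin.
    inter, pairs = 0.0, 0
    for i, a in enumerate(classes):
        for b in classes[i + 1:]:
            d = np.linalg.norm(means[a] - means[b])
            inter += max(0.0, margin - d) ** 2
            pairs += 1
    return intra + inter / max(pairs, 1)
```

Tight, well-separated clusters receive a lower loss than loose, overlapping ones, so minimizing it during pre-training encourages the discriminative geometry the abstract describes.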
- Research Article
- 10.1038/s41598-025-30660-8
- Dec 15, 2025
- Scientific Reports
- Xingyue Zhang + 7 more
To address the prevalent issues in the classification of hyperspectral image (HSI) and light detection and ranging (LiDAR) data fusion, such as insufficient dynamic adaptive interaction of cross-modal features, and difficulties in high-fidelity spatial detail reconstruction, this paper proposes an end-to-end LiDAR-dynamic-guided GAN for hyperspectral image hierarchical reconstruction and classification (ELDGG). The core framework of the network consists of a guided hierarchical reconstruction generator (GHR-Generator) and a perception-enhanced spectral regularization discriminator (PSR-Discriminator). First, we propose the cross-modal parameter-adaptive fusion module (CPAF-Module), which leverages the global context of LiDAR data to generate dynamic convolutional operators tailored for HSI features, addressing the limitations of static fusion methods. Second, to enhance the reconstruction quality of spatial details, we design the LiDAR-guided neural implicit field reconstruction unit (L-GNIF Unit). By learning a continuous mapping from coordinates to features, it achieves high-fidelity and artifact-free feature space reconstruction. Furthermore, we innovatively integrate spectral normalization constraints with a multi-level feature matching mechanism to construct the PSR-Discriminator. This discriminator provides more comprehensive perceptual signals across three scales: shallow textures, mid-level structures, and deep semantics. The entire framework is optimized through end-to-end training and a joint multi-task optimization loss function, ensuring that the generated fused features exhibit both authenticity and class discriminability. On this basis, we further design a spatial-spectral refinement classifier (SSR-Classifier) to accurately decode the deeply optimized feature maps, ultimately producing high-precision land cover classification results. Experiments demonstrate ELDGG’s superiority over state-of-the-art methods in both fusion quality and classification accuracy.
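The CPAF-Module above generates dynamic convolutional operators from LiDAR context. A minimal numpy sketch of that general mechanism (the single linear generator, softmax normalization, and all names are assumptions, not the paper's module):

```python
import numpy as np

def lidar_conditioned_kernel(lidar_context, w_gen, k=3):
    """Generate a k x k kernel from a global LiDAR descriptor via a
    linear map, softmax-normalized so weights sum to 1.
    Hypothetical stand-in for a dynamic-convolution generator."""
    raw = lidar_context @ w_gen          # (k*k,) kernel logits
    kern = np.exp(raw - raw.max())
    return (kern / kern.sum()).reshape(k, k)

def dynamic_conv2d(feat, kernel):
    """Apply the generated kernel to one 2-D feature channel (valid mode)."""
    k = kernel.shape[0]
    h, w = feat.shape
    out = np.empty((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(feat[i:i + k, j:j + k] * kernel)
    return out
```

The key property is that the filter applied to the HSI features is a function of the LiDAR input rather than a fixed learned constant, which is what distinguishes dynamic from static fusion.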
- Research Article
- 10.3390/rs17244035
- Dec 15, 2025
- Remote Sensing
- Wenyi Hu + 4 more
Convolutional Neural Networks (CNNs) have been extensively applied for the extraction of deep features in hyperspectral imagery tasks. However, traditional 3D-CNNs are limited by their fixed-size receptive fields and inherent locality. This restricts their ability to capture multi-scale objects and model long-range dependencies, ultimately hindering the representation of large-area land-cover structures. To overcome these drawbacks, we present a new framework designed to integrate multi-scale feature fusion and a hierarchical attention mechanism for hyperspectral image classification. Channel-wise Squeeze-and-Excitation (SE) and Convolutional Block Attention Module (CBAM) spatial attention are combined to enhance feature representation from both spectral bands and spatial locations, allowing the network to emphasize critical wavelengths and salient spatial structures. Finally, by integrating the self-attention inherent in the Transformer architecture with a Cross-Attention Fusion (CAF) mechanism, a local-global feature fusion module is developed. This module effectively captures extended-span interdependencies present in hyperspectral remote sensing images, and this process facilitates the effective integration of both localized and holistic attributes. On the Salinas Valley dataset, the proposed method delivers an Overall Accuracy (OA) of 0.9929 and an Average Accuracy (AA) of 0.9949, attaining perfect recognition accuracy for certain classes. The proposed model demonstrates commendable class balance and classification stability. Across multiple publicly available hyperspectral remote sensing image datasets, it systematically produces classification outcomes that significantly outperform those of established benchmark methods, exhibiting distinct advantages in feature representation, structural modeling, and the discrimination of complex ground objects.
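The channel-wise Squeeze-and-Excitation (SE) attention this abstract combines with CBAM is a published, standard building block. A minimal numpy sketch of SE (function names and shapes here are illustrative; w1 and w2 stand in for the learned bottleneck weights):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def squeeze_excite(feat, w1, w2):
    """Channel-wise SE attention: global-average-pool each channel
    (squeeze), pass through a two-layer bottleneck with ReLU then
    sigmoid (excite), and rescale channels by the resulting gates."""
    z = feat.mean(axis=(0, 1))                  # squeeze: (C,)
    s = sigmoid(np.maximum(z @ w1, 0.0) @ w2)   # excite: gates in (0, 1)
    return feat * s                             # broadcast over H, W
```

Because each gate lies in (0, 1), SE can only attenuate channels relative to their input magnitude, letting the network emphasize informative spectral bands by suppressing the rest.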