Multiscale Feature Knowledge Distillation and Implicit Object Discovery for Few-Shot Object Detection in Remote Sensing Images

  • Abstract
  • References
  • Similar Papers
Abstract

References (showing 10 of 53 papers)
  • Few-Shot Object Detection on Remote Sensing Images. Xiang Li + 2 more. IEEE Transactions on Geoscience and Remote Sensing, Feb 2, 2021. DOI: 10.1109/tgrs.2021.3051383. Cited by 97. Open access.

  • Pseudo-Labeling and Confirmation Bias in Deep Semi-Supervised Learning. Eric Arazo + 4 more. Dec 24, 2019. DOI: 10.1109/ijcnn48605.2020.9207304. Cited by 513. Open access.

  • Multi-Scale Spatial and Channel-wise Attention for Improving Object Detection in Remote Sensing Imagery. Jie Chen + 4 more. IEEE Geoscience and Remote Sensing Letters, Aug 29, 2019. DOI: 10.1109/lgrs.2019.2930462. Cited by 97.

  • When Deep Learning Meets Metric Learning: Remote Sensing Image Scene Classification via Learning Discriminative CNNs. Gong Cheng + 4 more. IEEE Transactions on Geoscience and Remote Sensing, May 1, 2018. DOI: 10.1109/tgrs.2017.2783902. Cited by 1118.

  • DID: Disentangling-Imprinting-Distilling for Continuous Low-Shot Detection. Xianyu Chen + 3 more. IEEE Transactions on Image Processing, Jan 1, 2020. DOI: 10.1109/tip.2020.3006397. Cited by 9.

  • Solo-to-Collaborative Dual-Attention Network for One-Shot Object Detection in Remote Sensing Images. Lingjun Li + 5 more. IEEE Transactions on Geoscience and Remote Sensing, Jan 1, 2022. DOI: 10.1109/tgrs.2021.3091003. Cited by 25.

  • Label, Verify, Correct: A Simple Few Shot Object Detection Method. Prannay Kaul + 2 more. Jun 1, 2022. DOI: 10.1109/cvpr52688.2022.01384. Cited by 89. Open access.

  • Multi-scale object detection in remote sensing imagery with convolutional neural networks. Zhipeng Deng + 5 more. ISPRS Journal of Photogrammetry and Remote Sensing, May 2, 2018. DOI: 10.1016/j.isprsjprs.2018.04.003. Cited by 426.

  • Adaptive meta-knowledge transfer network for few-shot object detection in very high resolution remote sensing images. Xi Chen + 9 more. International Journal of Applied Earth Observation and Geoinformation, Feb 7, 2024. DOI: 10.1016/j.jag.2024.103675. Cited by 4. Open access.

  • You Only Look Once: Unified, Real-Time Object Detection. Joseph Redmon + 3 more. Jun 1, 2016. DOI: 10.1109/cvpr.2016.91. Cited by 34279. Open access.

Similar Papers
  • Research Article. Remote sensing image super-resolution and object detection: Benchmark and state of the art. Yi Wang + 7 more. Expert Systems with Applications, Mar 2, 2022. DOI: 10.1016/j.eswa.2022.116793. Cited by 143.


  • Research Article. LR-TSDet: Towards Tiny Ship Detection in Low-Resolution Remote Sensing Images. Jixiang Wu + 3 more. Remote Sensing, Sep 28, 2021. DOI: 10.3390/rs13193890. Cited by 19.

Recently, deep learning-based methods have made great improvements in object detection in remote sensing images (RSIs). However, detecting tiny objects in low-resolution images is still challenging. The features of these objects are not distinguishable enough due to their tiny size and confusing backgrounds and can be easily lost as the network deepens or downsamples. To address these issues, we propose an effective Tiny Ship Detector for Low-Resolution RSIs, abbreviated as LR-TSDet, consisting of three key components: a filtered feature aggregation (FFA) module, a hierarchical-atrous spatial pyramid (HASP) module, and an IoU-Joint loss. The FFA module captures long-range dependencies by calculating the similarity matrix so as to strengthen the responses of instances. The HASP module obtains deep semantic information while maintaining the resolution of feature maps by aggregating four parallel hierarchical-atrous convolution blocks of different dilation rates. The IoU-Joint loss is proposed to alleviate the inconsistency between classification and regression tasks, and guides the network to focus on samples that have both high localization accuracy and high confidence. Furthermore, we introduce a new dataset called GF1-LRSD, collected from the Gaofen-1 satellite, for tiny ship detection in low-resolution RSIs. The resolution of the images is 16 m and the mean size of objects is about 10.9 pixels, which is much smaller than in public RSI datasets. Extensive experiments on GF1-LRSD and DOTA-Ship show that our method outperforms several competitors, proving its effectiveness and generality.
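The resolution-preserving trick behind an atrous spatial pyramid like HASP, parallel dilated convolutions with different rates whose outputs are aggregated, can be sketched in one dimension. Everything below is an illustrative toy under assumed names, not the LR-TSDet code:

```python
# Toy sketch of an atrous (dilated) convolution pyramid: parallel
# branches whose kernel taps are spaced by different dilation rates,
# summed together. Stride stays 1, so resolution is preserved.

def dilated_conv1d(x, kernel, rate):
    """1-D dilated convolution with zero padding ('same' output length)."""
    k = len(kernel)
    pad = (k - 1) * rate // 2
    padded = [0.0] * pad + list(x) + [0.0] * pad
    out = []
    for i in range(len(x)):
        out.append(sum(kernel[j] * padded[i + j * rate] for j in range(k)))
    return out

def atrous_pyramid(x, kernel, rates=(1, 2, 4, 8)):
    """Sum of parallel dilated branches: the receptive field grows with
    the largest rate while the feature length stays unchanged."""
    branches = [dilated_conv1d(x, kernel, r) for r in rates]
    return [sum(vals) for vals in zip(*branches)]

signal = [0.0] * 8 + [1.0] + [0.0] * 8      # a single impulse
out = atrous_pyramid(signal, kernel=[1.0, 1.0, 1.0])
```

Because every branch keeps stride 1, the output has the input's length; only the spacing of the taps changes, which is why such pyramids gain context without the downsampling that erases tiny objects.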

  • Research Article. Object Detection in Large-Scale Remote Sensing Images With a Distributed Deep Learning Framework. Linkai Liu + 6 more. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Jan 1, 2022. DOI: 10.1109/jstars.2022.3206085. Cited by 6.

With the accumulation and storage of remote sensing images in various satellite data centers, the rapid detection of objects of interest from large-scale remote sensing images is a current research focus and application requirement. Although some cutting-edge object detection algorithms in remote sensing images perform well in terms of accuracy (mAP), their inference speed is slow and their hardware requirements are high, making them unsuitable for real-time object detection in large-scale remote sensing images. To address this issue, we propose a fast inference framework for object detection in large-scale remote sensing images. On the one hand, we introduce α-IoU Loss on the YWCSL model to implement adaptive weighted loss and gradient, which achieves 64.62% and 79.54% mAP on the DIOR-R and DOTA test sets, respectively. More importantly, the inference speed of the YWCSL model reaches 60.74 FPS on a single NVIDIA GeForce RTX 3080 Ti, which is 2.87 times faster than the current state-of-the-art one-stage detector S²A-Net. On the other hand, we build a distributed inference framework to enable fast inference on large-scale remote sensing images. Specifically, we save the images on HDFS for distributed storage and deploy the pre-trained YWCSL model on the Spark cluster. In addition, we use a custom partitioner, RankPartition, to repartition the data to further improve the performance of the cluster. When using 5 nodes, the speedup of the cluster reaches 9.54, which is 90.80% higher than the theoretical linear speedup (5.00). Our distributed inference framework for large-scale remote sensing images significantly reduces the dependence of object detection on expensive hardware resources, which has important research significance for the wide application of object detection in remote sensing images.
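The α-IoU loss mentioned above generalizes the plain IoU loss 1 − IoU to 1 − IoU^α; below is a minimal sketch written from that published definition. The function names and the corner-format boxes are assumptions, not the YWCSL code:

```python
# Illustrative sketch of the alpha-IoU idea: 1 - IoU**alpha.
# Boxes are axis-aligned (x1, y1, x2, y2) tuples (an assumed format).

def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def alpha_iou_loss(pred, target, alpha=3.0):
    # alpha > 1 sharpens the loss near IoU = 1, relatively up-weighting
    # already-well-localized boxes; alpha = 1 recovers plain IoU loss.
    return 1.0 - iou(pred, target) ** alpha

perfect = alpha_iou_loss((0, 0, 2, 2), (0, 0, 2, 2))  # exact match -> 0.0
partial = alpha_iou_loss((0, 0, 2, 2), (1, 0, 3, 2))  # IoU = 1/3
```

The power α acts as an adaptive weighting of the loss and its gradient, which is the role the abstract ascribes to α-IoU.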

  • Research Article. Multi-Size Object Detection in Large Scene Remote Sensing Images Under Dual Attention Mechanism. Jinkang Wang + 5 more. IEEE Access, Jan 1, 2022. DOI: 10.1109/access.2022.3141059. Cited by 9.

The remote sensing images in large scenes have a complex background, and the types, sizes, and postures of the targets are different, making object detection in remote sensing images difficult. To solve this problem, an end-to-end multi-size object detection method based on a dual attention mechanism is proposed in this paper. First, the MobileNets backbone network is used to extract multi-layer features of remote sensing images as the input of MFCA, a multi-size feature concentration attention module. MFCA employs an attention mechanism to suppress noise, enhance effective feature reuse, and improve the adaptability of the network to multi-size target features through multi-layer convolution operations. Then, TSDFF (a two-stage deep feature fusion module) deeply fuses the feature maps output by MFCA to maximize the correlation between the feature sets and especially improve the feature expression of small targets. Next, the GLCNet (global-local context network) and the SSA (significant simple attention module) distinguish the fused features and screen out useful channel information, which makes the detected features more representative. Finally, the loss function is improved to truly reflect the difference between the candidate frames and the real frames, enhancing the network's ability to predict complex samples. The performance of our proposed method is compared with other advanced algorithms on the NWPU VHR-10, DOTA, and RSOD open datasets. Experimental results show that our proposed method achieves the best AP (average precision) and mAP (mean average precision), indicating that the method can accurately detect multi-type, multi-size, and multi-posture targets with high adaptability.
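The channel-attention idea running through modules like MFCA and SSA can be illustrated very loosely: pool each channel to one statistic, turn the statistics into per-channel weights, and rescale the channels so informative ones dominate. Real modules learn their weights; this toy version simply derives them from per-channel means, so every name below is an assumption:

```python
import math

def channel_attention(feat):
    """feat: dict of channel name -> 2-D feature map (list of rows).
    Returns the reweighted channels and the softmax channel weights."""
    means = {c: sum(map(sum, m)) / (len(m) * len(m[0]))
             for c, m in feat.items()}
    # softmax over per-channel means: stronger channels get more weight
    z = sum(math.exp(v) for v in means.values())
    weights = {c: math.exp(v) / z for c, v in means.items()}
    scaled = {c: [[w * x for x in row] for row in feat[c]]
              for c, w in weights.items()}
    return scaled, weights

feat = {"informative": [[1.0, 1.0], [1.0, 1.0]],
        "noisy":       [[0.0, 0.0], [0.0, 0.0]]}
scaled, weights = channel_attention(feat)
```

The weights sum to 1 and suppress the flat channel, which is the qualitative behavior (noise suppression, feature reuse) the abstract attributes to its attention modules.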

  • Research Article. MRPFA-Net for Shadow Detection in Remote-Sensing Images. Jing Zhang + 4 more. IEEE Transactions on Geoscience and Remote Sensing, Jan 1, 2023. DOI: 10.1109/tgrs.2023.3282967. Cited by 2.

The presence of shadows in high-resolution (HR) remote-sensing images reduces object detection accuracy. To address this problem, in this paper, we propose a deep neural network algorithm for shadow detection using the AISD and SSAD remote-sensing shadow image datasets. To improve the ability to extract spatial information from feature maps, we developed a cross-spatial attention module that focuses on semantic information in the horizontal and vertical directions at each position point on the remote-sensing image. This module overcomes the limitations of existing technologies in accurately judging small areas and suspected shadow areas and in missing or incorrectly detected shadow areas. In addition, to improve the ability to extract shadow features and the accuracy of shadow detection in remote-sensing images, we developed a channel attention module that assigns more attention to channels that conform to the shadow color characteristics. The network architecture comprises an encoder-decoder structure, with ResNeXt50 used as the backbone for the encoder and a multiresolution parallel fusion (MRPF) design for the decoder; cross-spatial and channel attention were incorporated into the decoder unit. Experimental results demonstrated the superior performance of the proposed algorithm, with an F1 score of 92.6% for the shadow category on the test set, thus outperforming other algorithms and making the proposed method an effective solution for shadow detection in HR remote-sensing images.

  • Research Article. OSSOD-RS: Towards Open-set Semi-Supervised Object Detection In Remote Sensing. Peisong Tang + 1 more. Journal of Physics: Conference Series, Jul 1, 2025. DOI: 10.1088/1742-6596/3055/1/012023.

Object detection in Remote Sensing Images (RSIs) has achieved remarkable progress with the help of deep learning. However, existing methods typically rely on large-scale annotated datasets, which are expensive and labor-intensive to acquire, especially due to the densely distributed small objects and complex backgrounds in RSIs. Semi-Supervised Object Detection (SSOD) offers a promising alternative by leveraging unlabeled data, yet most SSOD approaches assume a closed-set scenario, where labeled and unlabeled data share the same category distribution. This assumption often fails in real-world RSI settings, where uncurated data may include Out-of-Distribution (OOD) instances, leading to degraded performance if not properly addressed. In this paper, we propose a novel end-to-end framework for Open-set Semi-Supervised Object Detection (OSSOD) in RSIs, which is built upon the Unbiased Teacher architecture and specifically designed for small-object detection. The framework introduces three key components: (1) an Adaptive Class-wise Feature Memory Buffer that dynamically stores and manages class-specific features to effectively filter OOD instances; (2) a Channel-aware Dynamic Multi-path Fusion (CDMF) Module to enhance the representation of small objects across multiple semantic levels; and (3) a Siamese OOD Head combined with a Multi-threshold Region-based Pseudo-label Filtering Strategy to refine noisy pseudo-labels efficiently and robustly. Experiments on two challenging RSI benchmarks demonstrate that our method consistently outperforms state-of-the-art SSOD baselines in terms of accuracy.
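The pseudo-label filtering step can be sketched in its simplest confidence-threshold form. The per-class thresholds and names here are illustrative assumptions; the paper's multi-threshold region-based strategy is considerably more elaborate:

```python
# Minimal sketch of per-class confidence filtering of pseudo-labels:
# predictions below a class-specific threshold are dropped before they
# can act as pseudo ground truth for the student model.

def filter_pseudo_labels(preds, thresholds, default=0.9):
    """preds: list of (class_name, confidence, box) tuples.
    Keeps only predictions at or above their class's threshold."""
    return [p for p in preds
            if p[1] >= thresholds.get(p[0], default)]

preds = [("ship", 0.95, (0, 0, 10, 10)),
         ("ship", 0.40, (5, 5, 12, 12)),   # too uncertain: dropped
         ("plane", 0.80, (1, 1, 4, 4))]
kept = filter_pseudo_labels(preds, {"ship": 0.5, "plane": 0.7})
```

Per-class thresholds matter because confidence calibration differs across categories; a single global cutoff tends to starve rare classes of pseudo-labels, which is one motivation for multi-threshold schemes.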

  • Conference Article. M2-Net: A Multi-scale Multi-level Feature Enhanced Network for Object Detection in Optical Remote Sensing Images. Xinhai Ye + 4 more. Nov 29, 2020. DOI: 10.1109/dicta51227.2020.9363420. Cited by 6.

Object detection in remote sensing images is a challenging task due to diversified orientation, complex background, dense distribution and scale variation of objects. In this paper, we tackle this problem by proposing a novel multi-scale multi-level feature enhanced network (M2-Net) that integrates a Feature Map Enhancement (FME) module and a Feature Fusion Block (FFB) into Rotational RetinaNet. The FME module aims to enhance the weak features by factorizing the convolutional operation into two similar branches instead of one single branch, which helps to broaden the receptive field with fewer parameters. This module is embedded into different layers in the backbone network to capture multi-scale semantics and location information for detection. The FFB module is used to shorten the information propagation path between low-level high-resolution features in shallow layers and high-level semantic features in deep layers, facilitating more effective feature fusion and object detection, especially for objects with small sizes. Experimental results on three benchmark datasets show that our method not only outperforms many one-stage detectors but also achieves competitive accuracy with lower time cost than two-stage detectors.

  • Research Article. Deep Hash Assisted Network for Object Detection in Remote Sensing Images. Min Wang + 5 more. IEEE Access, Jan 1, 2020. DOI: 10.1109/access.2020.3024720. Cited by 4.

Remote Sensing Images (RSIs) often have an extremely wide width and abundant terrain. In order to achieve rapid object detection in large RSIs, in this paper, a Deep Hash Assisted Network (DHAN) is constructed by introducing a hashing encoding of images in a two-stage deep neural network. Different from the available detection networks, DHAN first locates candidate object regions and then transfers the learned features to another Region Proposal Network (RPN) for detection. On the one hand, it can avoid the calculations on the background irrelevant to objects. On the other hand, the hash encoding layer built into DHAN can accelerate detection via binary hash features. Moreover, a self-attention layer is designed and combined with the convolution layer to distinguish relatively small object regions from a very large scene. The proposed method is tested on several public data sets, and the comparison results show that DHAN can remarkably improve the detection efficiency on large RSIs and simultaneously achieve high detection accuracy.

  • Research Article. SAENet: Self-Supervised Adversarial and Equivariant Network for Weakly Supervised Object Detection in Remote Sensing Images. Xiaoxu Feng + 4 more. IEEE Transactions on Geoscience and Remote Sensing, Jan 1, 2022. DOI: 10.1109/tgrs.2021.3105575. Cited by 24.

Weakly supervised object detection (WSOD) in remote sensing images (RSIs) remains a challenge when learning a subtle object detection model with only image-level annotations. Most works tend to optimize the detection model by exploiting the most-contributing region, and are thereby dominated by the most discriminative part of an object. Meanwhile, these methods ignore the consistency across different spatial transformations of the same image and always label them with different classes, which introduces potential ambiguities. To tackle these challenges, we propose a unique self-supervised adversarial and equivariant network (SAENet) that aims at learning complementary and consistent visual patterns for WSOD in RSIs. To this end, an adversarial dropout-activation block is first designed to facilitate the entire object detector via adaptively hiding the discriminative parts and highlighting the instance-related regions. Besides, we further introduce a flexible self-supervised transformation equivariance mechanism on each potential instance from multiple spatial transformations to obtain spatially consistent self-supervisions. Accordingly, the obtained supervisions can be leveraged to pursue a more robust and spatially consistent object detector. Comprehensive experiments on the challenging LEarning, VIsion and Remote sensing Laboratory (LEVIR), NorthWestern Polytechnical University (NWPU) VHR-10.v2, and detection in optical RSIs (DIOR) datasets validate that SAENet outperforms the previous state-of-the-art works and achieves 46.2%, 60.7%, and 27.1% mAP, respectively.

  • Research Article. Feature Enhanced Anchor-Free Network for School Detection in High Spatial Resolution Remote Sensing Images. Han Fu + 5 more. Applied Sciences, Mar 18, 2022. DOI: 10.3390/app12063114. Cited by 2.

Object detection in remote sensing images (RSIs) is currently one of the most important topics, as it can promote the understanding of the earth and better serve the construction of the digital earth. In addition to single objects, there are many composite objects in RSIs, especially primary and secondary schools (PSSs), which are composed of several parts and surrounded by complex background. Existing deep learning methods have difficulty detecting composite objects effectively. In this article, we propose a Feature Enhanced Network (FENet) based on an anchor-free method for PSSs detection. FENet can not only realize more accurate pixel-level detection based on enhanced features but also simplify the training process by avoiding hyper-parameters. First, an enhanced feature module (EFM) is designed to improve the representation ability of complex features. Second, a context-aware strategy is used for alleviating the interference of background information. In addition, complete intersection over union (CIoU) loss is employed for bounding box regression, which can obtain better convergence speed and accuracy. At the same time, we build a PSSs dataset for composite object detection. This dataset contains 1685 images of PSSs in the Beijing-Tianjin-Hebei region. Experimental results demonstrate that FENet outperforms several object detectors and achieves 78.7% average precision. The study demonstrates the advantage of our proposed method on PSSs detection.

  • Research Article. Attention-Based Multi-Level Feature Fusion for Object Detection in Remote Sensing Images. Xiaohu Dong + 5 more. Remote Sensing, Aug 4, 2022. DOI: 10.3390/rs14153735. Cited by 40.

We study the problem of object detection in remote sensing images. As a simple but effective feature extractor, Feature Pyramid Network (FPN) has been widely used in several generic vision tasks. However, it still faces some challenges when used for remote sensing object detection, as the objects in remote sensing images usually exhibit variable shapes, orientations, and sizes. To this end, we propose a dedicated object detector based on the FPN architecture to achieve accurate object detection in remote sensing images. Specifically, considering the variable shapes and orientations of remote sensing objects, we first replace the original lateral connections of FPN with Deformable Convolution Lateral Connection Modules (DCLCMs), each of which includes a 3×3 deformable convolution to generate feature maps with deformable receptive fields. Additionally, we further introduce several Attention-based Multi-Level Feature Fusion Modules (A-MLFFMs) to integrate the multi-level outputs of FPN adaptively, further enabling multi-scale object detection. Extensive experimental results on the DIOR dataset demonstrated the state-of-the-art performance achieved by the proposed method, with the highest mean Average Precision (mAP) of 73.6%.

  • Conference Article. Region-Attentioned Network with Location Scoring Dynamic-Threshold NMS for Object Detection in Remote Sensing Images. Wei Guo + 3 more. Dec 9, 2020. DOI: 10.1145/3448823.3448824. Cited by 1.

Object detection in remote sensing images is of importance in the field of computer vision. Although many advanced methods have succeeded in natural images, the progress in remote sensing images is relatively slow due to the complex backgrounds, vertical views, and variations in kind and size of the objects. To solve these problems, we propose a region-attentioned network with location scoring dynamic-threshold NMS for object detection in remote sensing images. In particular, we first introduce the saliency constraint and propose a region-attentioned network (RANet) to effectively enhance the object regions for better detection. Meanwhile, the proposed network adopts a feature pyramid, which fully uses the low-level and high-level features, to improve the ability for handling the multiscale objects. Then, considering that there are many dense objects in remote sensing images, we propose a novel dynamic-threshold NMS (DTNMS) method for overlap detection elimination, which is more reasonable and efficient than the traditional NMS method. In addition, we further employ the IoU head to obtain the location information of the predicted boxes and propose location scoring dynamic-threshold NMS (LSDTNMS), which can further improve the detection performance. Due to the prediction of the target mask in RANet, we can also obtain the detection results of the rotated bounding box. To verify the effectiveness of the proposed method, we conduct comparative experiments on the remote sensing public dataset and the experimental results demonstrate that the proposed method significantly outperforms state-of-the-art methods.
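A dynamic-threshold NMS can be read as raising the suppression threshold where candidates are dense, so crowded scenes keep more boxes. The sketch below is only one plausible toy reading of that idea, not the paper's DTNMS/LSDTNMS formula; all names and parameters are assumptions:

```python
# Greedy NMS whose IoU threshold for each kept box grows with the number
# of candidates crowding it (capped at max_thr). step=0 recovers
# classic fixed-threshold NMS. Boxes are (x1, y1, x2, y2, score).

def box_iou(a, b):
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def dynamic_nms(boxes, base_thr=0.3, step=0.0, max_thr=0.7):
    boxes = sorted(boxes, key=lambda b: b[4], reverse=True)
    kept = []
    while boxes:
        best = boxes.pop(0)
        kept.append(best)
        overlaps = [box_iou(best, b) for b in boxes]
        # raise the threshold where many candidates crowd the kept box
        density = sum(1 for o in overlaps if o > base_thr)
        thr = min(max_thr, base_thr + step * density)
        boxes = [b for b, o in zip(boxes, overlaps) if o <= thr]
    return kept

dense = [(0, 0, 2, 2, 0.9), (0.5, 0, 2.5, 2, 0.8), (0.6, 0, 2.6, 2, 0.7)]
plain = dynamic_nms(dense, step=0.0)     # fixed threshold: suppresses hard
adaptive = dynamic_nms(dense, step=0.2)  # relaxed in the crowded region
```

On this crowded toy input, the fixed threshold keeps only the top-scoring box, while the density-adapted threshold preserves a second overlapping detection, which is the qualitative behavior dynamic-threshold NMS targets for dense objects.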

  • Book Chapter. Ship Segmentation and Orientation Estimation Using Keypoints Detection and Voting Mechanism in Remote Sensing Images. Mingxian Nie + 2 more. Jan 1, 2019. DOI: 10.1007/978-3-030-22808-8_39. Cited by 7.

Ship detection in remote sensing images is an important and challenging task in civil fields. However, the various types of ships with different scales and ratios and the complex scenarios are the main bottlenecks for ship detection and orientation estimation. In this paper, we propose a new method based on Mask R-CNN, which can perform ship segmentation and direction estimation at the same time by simultaneously outputting the binary mask and the locations of the bow and stern keypoints. We can achieve keypoint detection of the ship without significantly losing the accuracy of the mask. Finally, we regress the coordinates of the ship's bow and stern to four quadrants and use a voting mechanism to determine which quadrant the bow keypoint lies in. Then we combine the quadrant of the bow keypoint with the minimum bounding box of the mask to determine the final orientation of the ship. Experiments on the datasets have achieved effective performance.

  • Research Article. Improved Faster R-CNN With Multiscale Feature Fusion and Homography Augmentation for Vehicle Detection in Remote Sensing Images. Hong Ji + 3 more. IEEE Geoscience and Remote Sensing Letters, Nov 1, 2019. DOI: 10.1109/lgrs.2019.2909541. Cited by 42.

Vehicle detection in remote sensing images has attracted remarkable attention for its important role in a variety of applications in traffic, security, and military fields. Motivated by the stunning success of region convolutional neural network (R-CNN) techniques, which have achieved state-of-the-art performance in the object detection task on benchmark data sets, we propose to improve the Faster R-CNN method with better feature extraction, multiscale feature fusion, and homography data augmentation to realize vehicle detection in remote sensing images. Extensive experiments on representative remote sensing data sets related to vehicle detection demonstrate that our method achieves better performance than the state-of-the-art approaches. The source code will be made available (after the review process).

  • Research Article. Adaptive Anchor Networks for Multi-Scale Object Detection in Remote Sensing Images. Miaohui Zhang + 4 more. IEEE Access, Jan 1, 2020. DOI: 10.1109/access.2020.2982658. Cited by 12.

Accurate and effective object detection in remote sensing images plays an extremely important role in marine transport, environmental monitoring and military operations. Due to the powerful ability of feature representation, region-based convolutional neural networks (RCNNs) have been widely used in this field; they first generate candidate regions through extracted feature maps and then classify and locate objects. However, most existing methods generally use traditional backbone networks to extract feature maps with a decreased spatial resolution because of the continuous down-sampling, which will weaken the information detected from small objects. Besides, a sliding-window strategy is employed in these methods to generate fixed anchors with a preset scale on feature maps, which is inappropriate for multi-scale object detection in remote sensing images. To solve the above problems, a novel and effective object detection framework named DetNet-FPN (Feature Pyramid Network) is proposed in this paper, in which a feature pyramid with strong feature representation is created by combining feature maps of different spatial resolutions, and at the same time, the resolution of feature maps is maintained by involving dilated convolutions. Furthermore, to match the proposed backbone, the GA (Guided Anchoring)-RPN strategy is improved for adaptive anchor generation; this strategy simultaneously predicts the locations where the centers of objects are likely to exist as well as the scales and aspect ratios at different locations. Extensive experiments and comprehensive evaluations demonstrate the effectiveness of the proposed framework on the DOTA and NWPU VHR-10 datasets.

More from: IEEE Transactions on Geoscience and Remote Sensing
  • Research Article. An Autoencoder Architecture for L-Band Passive Microwave Retrieval of Landscape Freeze-Thaw Cycle. Divya Kumawat + 4 more. Jan 1, 2025. DOI: 10.1109/tgrs.2025.3530356.

  • Research Article. A Framework for Indeterministic Model-Based Microtremor Inversion to Estimate Mechanical Properties of Rock Mass in Tectonic Mélange Area. Xingliang Peng + 2 more. Jan 1, 2025. DOI: 10.1109/tgrs.2025.3555232. Cited by 1.

  • Research Article. Data-Driven Multi-Satellite Orbit Selection for Nighttime Wildfire Remote Sensing. Jonathan Sipps + 1 more. Jan 1, 2025. DOI: 10.1109/tgrs.2025.3617776.

  • Research Article. A Semantic-Guided Framework for Few-Shot Remote Sensing Object Detection. Chenchen Sun + 4 more. Jan 1, 2025. DOI: 10.1109/tgrs.2025.3592484.

  • Research Article. KDGraph: A Keypoint Detection Method for Road Graph Extraction From Remote Sensing Images. Wei He + 4 more. Jan 1, 2025. DOI: 10.1109/tgrs.2024.3521516.

  • Research Article. Integrating GNSS and GRACE Observations to Investigate Water Storage Variations Across Different Climatic Regions of China. Tao Chen + 3 more. Jan 1, 2025. DOI: 10.1109/tgrs.2025.3563095.

  • Research Article. HSFormer: Multiscale Hybrid Sparse Transformer for Uncertainty-Aware Cloud and Shadow Removal. Changqi Sun + 5 more. Jan 1, 2025. DOI: 10.1109/tgrs.2025.3564855.

  • Research Article. DP-BICNN: A Bidirectional Information Compensation Neural Network Coupled With Data-Driven and Physical Information for Sea Surface Temperature Prediction. Xiong Liu + 6 more. Jan 1, 2025. DOI: 10.1109/tgrs.2025.3560717.

  • Research Article. Community Structure Guided Network for Hyperspectral Image Classification. Qingwang Wang + 5 more. Jan 1, 2025. DOI: 10.1109/tgrs.2025.3542422.

  • Research Article. Transformer-Based Cross-Domain Few-Shot Learning for Hyperspectral Target Detection. Shou Feng + 6 more. Jan 1, 2025. DOI: 10.1109/tgrs.2024.3521035. Cited by 5.
