Mutual learning with discrepancy for weakly supervised object detection

  • Abstract
  • References
  • Similar Papers
Abstract

References (showing 10 of 16 papers)
  • Peng Tang et al., "PCL: Proposal Cluster Learning for Weakly Supervised Object Detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, Oct 16, 2018, doi:10.1109/tpami.2018.2876304. Cited by 357. Open access.

  • Kequan Yang et al., "Pseudo-label enhancement for weakly supervised object detection using self-supervised vision transformer," Knowledge-Based Systems, Feb 1, 2025, doi:10.1016/j.knosys.2025.113012. Cited by 1.

  • Zhihao Wu et al., "Misclassification in Weakly Supervised Object Detection," IEEE Transactions on Image Processing, Jan 1, 2024, doi:10.1109/tip.2024.3402981. Cited by 12.

  • Zhida Ren et al., "IDO: Instance dual-optimization for weakly supervised object detection," Applied Intelligence, Aug 29, 2023, doi:10.1007/s10489-023-04956-z. Cited by 3.

  • Yunqiu Xu et al., "Pyramidal Multiple Instance Detection Network With Mask Guided Self-Correction for Weakly Supervised Object Detection," IEEE Transactions on Image Processing, Jan 1, 2021, doi:10.1109/tip.2021.3056887. Cited by 41.

  • Guanchun Wang et al., "Negative Deterministic Information-Based Multiple Instance Learning for Weakly Supervised Object Detection and Segmentation," IEEE Transactions on Neural Networks and Learning Systems, Apr 1, 2025, doi:10.1109/tnnls.2024.3395751. Cited by 4. Open access.

  • Guoqing Li et al., "Efficient densely connected convolutional neural networks," Pattern Recognition, Aug 20, 2020, doi:10.1016/j.patcog.2020.107610. Cited by 106.

  • Jiajie Wang et al., "Collaborative Learning for Weakly Supervised Object Detection," Jul 1, 2018, doi:10.24963/ijcai.2018/135. Cited by 42. Open access.

  • Guoqing Li et al., "OGCNet: Overlapped group convolution for deep convolutional neural networks," Knowledge-Based Systems, Aug 1, 2022, doi:10.1016/j.knosys.2022.109571. Cited by 10.

  • Jordi Pont-Tuset et al., "Multiscale Combinatorial Grouping for Image Segmentation and Object Proposal Generation," IEEE Transactions on Pattern Analysis and Machine Intelligence, Mar 2, 2016, doi:10.1109/tpami.2016.2537320. Cited by 595. Open access.

Similar Papers
  • Research Article: Qiang Zhai et al., "MGL: Mutual Graph Learning for Camouflaged Object Detection," IEEE Transactions on Image Processing, Jan 1, 2023, doi:10.1109/tip.2022.3223216. Cited by 34.

Camouflaged object detection, which aims to detect/segment the object(s) that blend in with their surroundings, remains challenging for deep models due to the intrinsic similarities between foreground objects and background surroundings. Ideally, an effective model should be capable of finding valuable clues from the given scene and integrating them into a joint learning framework to co-enhance the representation. Inspired by this observation, we propose a novel Mutual Graph Learning (MGL) model by shifting the conventional perspective of mutual learning from regular grids to the graph domain. Specifically, an image is decoupled by MGL into two task-specific feature maps: one for finding the rough location of the target and the other for capturing its accurate boundary details. Then, the mutual benefits can be fully exploited by recurrently reasoning about their high-order relations through graphs. It should be noted that our method differs from most mutual learning models, which model all between-task interactions with a shared function. To increase information interactions, MGL is built with typed functions for dealing with different complementary relations. To overcome the accuracy loss caused by interpolation to higher resolution and the computational redundancy resulting from recurrent learning, S-MGL is equipped with a multi-source attention contextual recovery module, called R-MGL_v2, which uses the pixel feature information iteratively. Experiments on challenging datasets, including CHAMELEON, CAMO, COD10K, and NC4K, demonstrate the effectiveness of our MGL with superior performance over existing state-of-the-art methods. The code can be found at https://github.com/fanyang587/MGL.

  • Conference Article: Qiang Zhai et al., "Mutual Graph Learning for Camouflaged Object Detection," Jun 1, 2021, doi:10.1109/cvpr46437.2021.01280. Cited by 235.

Automatically detecting/segmenting object(s) that blend in with their surroundings is difficult for current models. A major challenge is that the intrinsic similarities between such foreground objects and background surroundings make the features extracted by deep models indistinguishable. To overcome this challenge, an ideal model should be able to seek valuable, extra clues from the given scene and incorporate them into a joint learning framework for representation co-enhancement. With this inspiration, we design a novel Mutual Graph Learning (MGL) model, which generalizes the idea of conventional mutual learning from regular grids to the graph domain. Specifically, MGL decouples an image into two task-specific feature maps, one for roughly locating the target and the other for accurately capturing its boundary details, and fully exploits the mutual benefits by recurrently reasoning their high-order relations through graphs. Importantly, in contrast to most mutual learning approaches that use a shared function to model all between-task interactions, MGL is equipped with typed functions for handling different complementary relations to maximize information interactions. Experiments on challenging datasets, including CHAMELEON, CAMO and COD10K, demonstrate the effectiveness of our MGL with superior performance to existing state-of-the-art methods. Code is available at https://github.com/fanyang587/MGL.
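Both MGL abstracts above turn on the same design choice: between-task interactions are modeled with typed, direction-specific functions rather than a single shared one, applied recurrently across two task branches. A minimal PyTorch sketch of that idea, with the graph reasoning collapsed into residual 1x1-convolution exchanges for brevity and all names hypothetical:

```python
import torch
import torch.nn as nn

class TypedMutualInteraction(nn.Module):
    """Two task branches exchange information through *typed*
    (direction-specific) functions instead of one shared function."""

    def __init__(self, channels: int, steps: int = 2):
        super().__init__()
        # One dedicated 1x1 conv per interaction direction ("typed function").
        self.boundary_to_region = nn.Conv2d(channels, channels, kernel_size=1)
        self.region_to_boundary = nn.Conv2d(channels, channels, kernel_size=1)
        self.steps = steps

    def forward(self, region_feat: torch.Tensor, boundary_feat: torch.Tensor):
        # Recurrent mutual refinement of the two task-specific feature maps.
        for _ in range(self.steps):
            region_feat = region_feat + torch.relu(self.boundary_to_region(boundary_feat))
            boundary_feat = boundary_feat + torch.relu(self.region_to_boundary(region_feat))
        return region_feat, boundary_feat
```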

  • Conference Article: Usma Niyaz et al., "Augmenting Knowledge Distillation with Peer-to-Peer Mutual Learning for Model Compression," Mar 28, 2022, doi:10.1109/isbi52829.2022.9761511. Cited by 7.

Knowledge distillation (KD) is an effective model compression technique where a compact student network is taught to mimic the behavior of a complex and highly trained teacher network. In contrast, Mutual Learning (ML) provides an alternative strategy where multiple simple student networks benefit from sharing knowledge, even in the absence of a powerful but static teacher network. Motivated by these findings, we propose a single-teacher, multi-student framework that leverages both KD and ML to achieve better performance. Furthermore, an online distillation strategy is utilized to train the teacher and students simultaneously. To evaluate the performance of the proposed approach, extensive experiments were conducted using three different versions of teacher-student networks on benchmark biomedical classification (MSI vs. MSS) and object detection (Polyp Detection) tasks. The ensemble of student networks trained in the proposed manner achieved better results than the ensemble of students trained using KD or ML individually, establishing the benefit of augmenting knowledge transfer from teacher to students with peer-to-peer learning between students.
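The single-teacher, multi-student idea combines a standard distillation term with a peer-to-peer mutual learning term. A hedged sketch of one plausible combined objective in PyTorch; the temperature T and the alpha/beta weights are illustrative assumptions, not the paper's values:

```python
import torch.nn.functional as F

def kd_ml_loss(student_logits, peer_logits, teacher_logits, labels,
               T=4.0, alpha=0.5, beta=0.5):
    """One student's loss: supervised CE + distillation from the teacher (KD)
    + mimicry of a peer student (ML). Weights and temperature are assumptions."""
    ce = F.cross_entropy(student_logits, labels)
    # Classic temperature-scaled KD term (teacher -> student).
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * (T * T)
    # Peer-to-peer mutual learning term (student <-> student).
    ml = F.kl_div(F.log_softmax(student_logits, dim=1),
                  F.softmax(peer_logits.detach(), dim=1),
                  reduction="batchmean")
    return ce + alpha * kd + beta * ml
```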

  • Conference Article: Yongsheng Liu et al., "Mutual Constraint Learning for Weakly Supervised Object Detection," Nov 1, 2019, doi:10.1109/iske47853.2019.9170207. Cited by 1.

The abundance of image-level labels and the lack of large-scale, detailed bounding-box annotations promote the expansion of weakly supervised techniques for object detection (WSOD). In this work, we propose a novel mutual constraint learning for convolutional neural networks applied to detect bounding boxes with only global image-level supervision. The essence of our architecture is two new differentiable modules, Determination Network and Parameterised Spatial Division, which explicitly allow the spatial division of the feature map within the network. These learnable modules give neural networks the ability to constructively generate shadow activation maps, dependent on the class activation maps. To demonstrate the effectiveness of our model for WSOD, we conduct extensive experiments on the multi-MNIST dataset. Experimental results show that mutual constraint learning can (i) help improve the accuracy of multi-category tasks, (ii) be implemented in an end-to-end way with only image-level annotations, and (iii) output accurate bounding box labels, thereby achieving object detection.

  • Research Article: Yi Bin et al., "MR-NET: Exploiting Mutual Relation for Visual Relationship Detection," Proceedings of the AAAI Conference on Artificial Intelligence, Jul 17, 2019, doi:10.1609/aaai.v33i01.33018110. Cited by 17.

Inferring the interactions between objects, a.k.a. visual relationship detection, is a crucial point for vision understanding, which captures more definite concepts than object detection. Most previous work treats the interaction between a pair of objects as one-way and fails to exploit the mutual relation between objects, which is essential to modern visual applications. In this work, we propose a mutual relation net, dubbed MR-Net, to explore the mutual relation between paired objects for visual relationship detection. Specifically, we construct a mutual relation space to model the mutual interaction of paired objects, and employ a linear constraint to optimize the mutual interaction, which is called mutual relation learning. Our mutual relation learning does not introduce any parameters, and can be adapted to improve the performance of other methods. In addition, we devise a semantic ranking loss to discriminatively penalize predicates with semantic similarity, which is ignored by the traditional loss function (e.g., cross entropy with softmax). Then, our MR-Net optimizes the mutual relation learning together with the semantic ranking loss with a siamese network. The experimental results on two commonly used datasets (VG and VRD) demonstrate the superior performance of the proposed approach.

  • Conference Article: Runmin Wu et al., "A Mutual Learning Method for Salient Object Detection With Intertwined Multi-Supervision," Jun 1, 2019, doi:10.1109/cvpr.2019.00834. Cited by 228.

Though deep learning techniques have made great progress in salient object detection recently, the predicted saliency maps still suffer from incomplete predictions due to the internal complexity of objects and inaccurate boundaries caused by strides in convolution and pooling operations. To alleviate these issues, we propose to train saliency detection networks by exploiting supervision from not only salient object detection, but also foreground contour detection and edge detection. First, we leverage salient object detection and foreground contour detection tasks in an intertwined manner to generate saliency maps with uniform highlight. Second, the foreground contour and edge detection tasks guide each other simultaneously, thereby leading to more precise foreground contour prediction and reducing local noise in edge prediction. In addition, we develop a novel mutual learning module (MLM) which serves as the building block of our method. Each MLM consists of multiple network branches trained in a mutual learning manner, which improves the performance by a large margin. Extensive experiments on seven challenging datasets demonstrate that the proposed method has delivered state-of-the-art results in both salient object detection and edge detection.
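The mutual learning module (MLM) trains several parallel branches that supervise one another. A minimal sketch of one plausible pairwise consistency term between branch saliency maps (MSE between sigmoided logits here; the paper's exact formulation may differ, and all names are hypothetical):

```python
import torch
import torch.nn.functional as F

def mutual_branch_loss(branch_logits):
    """branch_logits: list of (B, 1, H, W) saliency logits, one per branch
    (at least two). Each branch is pulled toward its peers' detached
    predictions, the core of a mutual learning module."""
    n = len(branch_logits)
    loss = 0.0
    for i in range(n):
        for j in range(n):
            if i != j:
                loss = loss + F.mse_loss(torch.sigmoid(branch_logits[i]),
                                         torch.sigmoid(branch_logits[j]).detach())
    return loss / (n * (n - 1))  # average over ordered branch pairs
```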

  • Book Chapter: Taojiannan Yang et al., "MutualNet: Adaptive ConvNet via Mutual Learning from Network Width and Resolution," Jan 1, 2020, doi:10.1007/978-3-030-58452-8_18. Cited by 56.

We propose the width-resolution mutual learning method (MutualNet) to train a network that is executable at dynamic resource constraints to achieve adaptive accuracy-efficiency trade-offs at runtime. Our method trains a cohort of sub-networks with different widths (i.e., number of channels in a layer) using different input resolutions to mutually learn multi-scale representations for each sub-network. It achieves consistently better ImageNet top-1 accuracy over the state-of-the-art adaptive network US-Net under different computation constraints, and outperforms the best compound scaled MobileNet in EfficientNet by 1.5%. The superiority of our method is also validated on COCO object detection and instance segmentation as well as transfer learning. Surprisingly, the training strategy of MutualNet can also boost the performance of a single network, which substantially outperforms the powerful AutoAugmentation in both efficiency (GPU search hours: 15000 vs. 0) and accuracy (ImageNet: 77.6% vs. 78.6%). Code is available at https://github.com/taoyang1122/MutualNet.
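The core training loop pairs sub-network widths with input resolutions so each configuration learns from a different scale. A hedged sketch of one such step; `model.set_width` is a hypothetical hook on a slimmable backbone, and the width/resolution grids are illustrative assumptions:

```python
import random
import torch.nn.functional as F

def mutualnet_step(model, images, labels, criterion,
                   widths=(0.25, 0.5, 0.75, 1.0),
                   resolutions=(128, 160, 192, 224)):
    """One training step: each sampled sub-network width sees a different
    input resolution, so the cohort mutually learns multi-scale features."""
    total = 0.0
    for w, r in zip(widths, random.sample(resolutions, k=len(widths))):
        model.set_width(w)  # hypothetical slimmable-network API
        x = F.interpolate(images, size=(r, r), mode="bilinear",
                          align_corners=False)
        total = total + criterion(model(x), labels)
    total.backward()  # gradients from all width-resolution pairs accumulate
    return total
```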

  • Research Article: Chenxing Xia et al., "MLBSNet: Mutual Learning and Boosting Segmentation Network for RGB-D Salient Object Detection," Electronics, Jul 10, 2024, doi:10.3390/electronics13142690. Cited by 1.

RGB-D salient object detection (SOD) primarily segments the most salient objects from a given scene by fusing RGB images and depth maps. Due to the inherent noise in the original depth map, fusion failures may occur, leading to performance bottlenecks. To address this issue, this paper proposes a mutual learning and boosting segmentation network (MLBSNet) for RGB-D salient object detection, which consists of a deep optimization module (DOM), a semantic alignment module (SAM), a cross-modal integration (CMI) module, and a separate reconstruct decoder (SRD). Specifically, the deep optimization module aims to obtain optimal depth information by learning the similarity between the original and predicted depth maps. To eliminate the uncertainty of single-modal neighboring features and capture the complementary features of multiple modalities, a semantic alignment module and a cross-modal integration module are introduced. Finally, a separate reconstruct decoder based on a multi-source feature integration mechanism is constructed to overcome the accuracy loss caused by segmentation. Through comparative experiments, our method outperforms 13 existing methods on five RGB-D datasets and achieves excellent performance on four evaluation metrics.

  • Research Article: Zixing Li et al., "Adaptive Modality Balanced Online Knowledge Distillation for Brain-Eye-Computer-Based Dim Object Detection," IEEE Transactions on Neural Networks and Learning Systems, Sep 15, 2025, doi:10.1109/tnnls.2025.3605710.

Advanced cognition can be measured from the human brain using brain-computer interfaces (BCIs). Integrating these interfaces with computer vision techniques, which possess efficient feature extraction capabilities, can achieve more robust and accurate detection of dim targets in aerial images. However, existing target detection methods primarily concentrate on homogeneous data, lacking efficient and versatile processing capabilities for heterogeneous multimodal data. In this article, we first build a brain-eye-computer-based object detection system for aerial images under few-shot conditions. This system detects suspicious targets using region proposal networks (RPNs), evokes the event-related potential (ERP) signal in electroencephalogram (EEG) through the eye-tracking-based slow serial visual presentation (ESSVP) paradigm, and constructs the EEG-image data pairs with eye movement data. Then, an adaptive modality balanced online knowledge distillation (AMBOKD) method is proposed to recognize dim objects with the EEG-image data. AMBOKD fuses EEG and image features using a multihead attention module, establishing a new modality with comprehensive features. To enhance the performance and robustness of the fusion modality, simultaneous training and mutual learning between modalities are enabled by end-to-end online KD (OKD). During the learning process, an adaptive modality balancing module is proposed to ensure multimodal equilibrium by dynamically adjusting the weights of the importance and the training gradients across various modalities. The effectiveness and superiority of our method are demonstrated by comparing it with existing state-of-the-art methods. Additionally, experiments conducted on public datasets and real-world scenarios demonstrate the reliability and practicality of the proposed system and the designed method. The dataset and the source code can be found at: https://github.com/lizixing23/AMBOKD.
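The fusion modality above is built by attending across EEG and image features with multi-head attention. A minimal sketch of such a cross-modal attention fusion in PyTorch; the token shapes, dimensions, and class name are assumptions, not the paper's implementation:

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Fuse two modalities with multi-head attention: EEG tokens query
    the image tokens, yielding a joint "fusion modality" feature."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, eeg_tokens: torch.Tensor, image_tokens: torch.Tensor):
        # eeg_tokens: (B, N_eeg, dim); image_tokens: (B, N_img, dim)
        fused, _ = self.attn(query=eeg_tokens, key=image_tokens,
                             value=image_tokens)
        return fused
```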

  • Research Article: Taojiannan Yang et al., "MutualNet: Adaptive ConvNet via Mutual Learning From Different Model Configurations," IEEE Transactions on Pattern Analysis and Machine Intelligence, Jan 1, 2023, doi:10.1109/tpami.2021.3138389. Cited by 17.

Most existing deep neural networks are static, which means they can only perform inference at a fixed complexity. But the resource budget can vary substantially across different devices. Even on a single device, the affordable budget can change with different scenarios, and repeatedly training networks for each required budget would be incredibly expensive. Therefore, in this work, we propose a general method called MutualNet to train a single network that can run at a diverse set of resource constraints. Our method trains a cohort of model configurations with various network widths and input resolutions. This mutual learning scheme not only allows the model to run at different width-resolution configurations but also transfers the unique knowledge among these configurations, helping the model to learn stronger representations overall. MutualNet is a general training methodology that can be applied to various network structures (e.g., 2D networks: MobileNets, ResNet, 3D networks: SlowFast, X3D) and various tasks (e.g., image classification, object detection, segmentation, and action recognition), and is demonstrated to achieve consistent improvements on a variety of datasets. Since we only train the model once, it also greatly reduces the training cost compared to independently training several models. Surprisingly, MutualNet can also be used to significantly boost the performance of a single network, if dynamic resource constraints are not a concern. In summary, MutualNet is a unified method for both static and adaptive, 2D and 3D networks. Code and pre-trained models are available at https://github.com/taoyang1122/MutualNet.

  • Conference Article: Yu-Jhe Li et al., "Cross-Domain Adaptive Teacher for Object Detection," Jun 1, 2022, doi:10.1109/cvpr52688.2022.00743. Cited by 143.

We address the task of domain adaptation in object detection, where there is an obvious domain gap between a domain with annotations (source) and a domain of interest without annotations (target). As a popular semi-supervised learning method, the teacher-student framework (a student model is supervised by the pseudo labels from a teacher model) has also yielded a large accuracy gain in cross-domain object detection. However, it suffers from the domain shift and generates many low-quality pseudo labels (e.g., false positives), which leads to sub-optimal performance. To mitigate this problem, we propose a teacher-student framework named Adaptive Teacher (AT) which leverages domain adversarial learning and weak-strong data augmentation to address the domain gap. Specifically, we employ feature-level adversarial training in the student model, allowing features derived from the source and target domains to share similar distributions. This process ensures the student model produces domain-invariant features. Furthermore, we apply weak-strong augmentation and mutual learning between the teacher model (taking data from the target domain) and the student model (taking data from both domains). This enables the teacher model to learn the knowledge from the student model without being biased to the source domain. We show that AT demonstrates superiority over existing approaches and even Oracle (fully-supervised) models by a large margin. For example, we achieve 50.9% (49.3%) mAP on Foggy Cityscapes (Clipart1k), which is 9.2% (5.2%) and 8.2% (11.0%) higher than previous state-of-the-art and Oracle, respectively.
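Two mechanics of the teacher-student loop described above are easy to isolate: the teacher is typically updated as an exponential moving average (EMA) of the student, and only confident teacher detections survive as pseudo labels. A hedged sketch; the 0.999 momentum and 0.8 threshold are illustrative assumptions:

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    """Mean-Teacher-style update: the teacher's weights track an
    exponential moving average of the student's weights."""
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(momentum).add_(s_p, alpha=1.0 - momentum)

@torch.no_grad()
def filter_pseudo_labels(boxes, scores, threshold=0.8):
    """Keep only confident teacher detections as pseudo ground truth,
    discarding likely false positives. boxes: (N, 4), scores: (N,)."""
    keep = scores >= threshold
    return boxes[keep], scores[keep]
```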

  • Research Article: Jiale Zhou et al., "Mutual learning with memory for semi-supervised pest detection," Frontiers in Plant Science, Jun 17, 2024, doi:10.3389/fpls.2024.1369696. Cited by 3.

Effectively monitoring pest-infested areas by computer vision is essential in precision agriculture in order to minimize yield losses and create early scientific preventative solutions. However, the scale variation, complex background, and dense distribution of pests bring challenges to accurate detection when utilizing vision technology. Simultaneously, supervised learning-based object detection heavily depends on abundant labeled data, which poses practical difficulties. To overcome these obstacles, in this paper we put forward an innovative semi-supervised pest detection framework, PestTeacher. The framework effectively mitigates the issues of confirmation bias and instability among detection results across different iterations. To address the issue of leakage caused by the weak features of pests, we propose the Spatial-aware Multi-Resolution Feature Extraction (SMFE) module. Furthermore, we introduce a Region Proposal Network (RPN) module with a cascading architecture. This module is specifically designed to generate higher-quality anchors, which are crucial for accurate object detection. We evaluated the performance of our method on two datasets: the corn borer dataset and the Pest24 dataset. The corn borer dataset encompasses data from various corn growth cycles, while the Pest24 dataset is a large-scale, multi-pest image dataset consisting of 24 classes and 25k images. Experimental results demonstrate that the enhanced model achieves approximately 80% effectiveness with only 20% of the training set supervised on both datasets. Our model improves mAP@0.5 (mean Average Precision) by 7.3, compared to 4.6 for the baseline model SoftTeacher. This method offers theoretical research and technical references for automated pest identification and management.

  • Research Article: Yujiao Wang et al., "Improved Regional Proposal Generation and Proposal Selection Method for Weakly Supervision Object detection," Academic Journal of Science and Technology, Apr 24, 2023, doi:10.54097/ajst.v5i3.7806.

In recent years, object detection has made great progress with the continuous development of deep neural networks. Many different fully supervised object detection algorithms now exist in computer vision and are approaching saturation, while object detection in a weakly supervised manner remains more challenging than strongly supervised object detection. Mature object detection algorithms rely heavily on strongly labeled datasets, which are very expensive and must be huge to train a good detection model, so weakly supervised object detection has received more and more attention. This paper introduces three modules that can be embedded in a weakly supervised object detection framework to generate high-quality proposals and screen them, ultimately selecting more accurate proposal boxes that benefit subsequent training. Their effectiveness is demonstrated on the PASCAL VOC2007 and PASCAL VOC2012 datasets, where this paper achieves significant improvements over existing classic weakly supervised object detection algorithms.

  • Conference Article: Weifeng Ge et al., "Multi-evidence Filtering and Fusion for Multi-label Classification, Object Detection and Semantic Segmentation Based on Weakly Supervised Learning," Jun 1, 2018, doi:10.1109/cvpr.2018.00139. Cited by 239.

Supervised object detection and semantic segmentation require object or even pixel level annotations. When only image-level labels exist, it is challenging for weakly supervised algorithms to achieve accurate predictions. The accuracy achieved by top weakly supervised algorithms is still significantly lower than that of their fully supervised counterparts. In this paper, we propose a novel weakly supervised curriculum learning pipeline for multi-label object recognition, detection and semantic segmentation. In this pipeline, we first obtain intermediate object localization and pixel labeling results for the training images, and then use such results to train task-specific deep networks in a fully supervised manner. The entire process consists of four stages, including object localization in the training images, filtering and fusing object instances, pixel labeling for the training images, and task-specific network training. To obtain clean object instances in the training images, we propose a novel algorithm for filtering, fusing and classifying object instances collected from multiple solution mechanisms. In this algorithm, we incorporate both metric learning and density-based clustering to filter detected object instances. Experiments show that our weakly supervised pipeline achieves state-of-the-art results in multi-label image classification as well as weakly supervised object detection and very competitive results in weakly supervised semantic segmentation on MS-COCO, PASCAL VOC 2007 and PASCAL VOC 2012.
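The filtering step described above combines metric learning with density-based clustering; the clustering half can be sketched with scikit-learn's DBSCAN, where low-density outlier instances are discarded. The eps and min_samples values are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def filter_object_instances(instance_features, eps=0.5, min_samples=5):
    """instance_features: (N, D) embeddings of detected object instances.
    Returns indices of instances that fall in dense clusters; DBSCAN
    marks low-density outliers with the label -1."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(instance_features)
    return np.flatnonzero(labels != -1)
```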

  • Conference Article: Xiaoran Zeng et al., "An adaptive learning-based weakly supervised object detection via context awareness," Sep 1, 2021, doi:10.1109/icbase53849.2021.00068. Cited by 2.

Weakly supervised object detection (WSOD) methods have become a powerful tool when fully labeled bounding boxes are unavailable. In the field of object detection, however, there is still a certain performance gap between existing WSOD techniques and fully supervised object detection methods. Two problems arise that are uncommon in fully supervised object detection: object ambiguity and falling into local optima. To solve the object ambiguity problem, this paper proposes multiple instance learning with self-training, in which pseudo labels are generated and gradually replace the training labels to locate objects more accurately during training. For the problem of falling into local optima, a context awareness block is added to the module, which makes the network pay more attention to the background and context of the region of interest (ROI) object. Experiments on the PASCAL VOC2007 and VOC2012 datasets demonstrate the effectiveness of the proposed approach.
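Multiple instance learning is the common engine behind such WSOD pipelines: per-proposal scores are aggregated into a single image-level prediction that image-level labels can supervise. A minimal WSDDN-style two-stream sketch (a standard formulation, not necessarily this paper's exact one; sharing one score tensor across both streams is a simplification, as WSDDN uses two separate branches):

```python
import torch
import torch.nn.functional as F

def mil_image_loss(proposal_scores, image_labels):
    """proposal_scores: (num_proposals, num_classes) raw scores.
    image_labels: (num_classes,) multi-hot float vector of image-level labels."""
    cls = F.softmax(proposal_scores, dim=1)  # "which class" per proposal
    det = F.softmax(proposal_scores, dim=0)  # "which proposal" per class
    # Aggregate proposals into one image-level prediction per class.
    image_pred = (cls * det).sum(dim=0).clamp(1e-6, 1 - 1e-6)
    return F.binary_cross_entropy(image_pred, image_labels)
```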

More from: Expert Systems with Applications
  • Bo Cai et al., "CFF-KDNet: Cross-Scale Feature Fusion Network with Knowledge Distillation for Camouflaged Object Detection," Nov 1, 2025, doi:10.1016/j.eswa.2025.130209.
  • Chun Shan et al., "Dual dynamic transformer for image captioning," Nov 1, 2025, doi:10.1016/j.eswa.2025.128597.
  • Ali Aghasi et al., "Optimization of resilient humanitarian logistics using a robust combinatorial multi-attribute reverse auction," Nov 1, 2025, doi:10.1016/j.eswa.2025.130282.
  • Xiaopeng Wang et al., "Reinforcement learning based early classification framework for power transformer differential protection," Nov 1, 2025, doi:10.1016/j.eswa.2025.128632.
  • Mohammed Sabri et al., "FWLMkNN: Efficient functional K-nearest neighbor based on clustering and functional data analysis," Nov 1, 2025, doi:10.1016/j.eswa.2025.128567.
  • Wei Wang et al., "Deep Inference Clustering Network with Information Maximization," Nov 1, 2025, doi:10.1016/j.eswa.2025.128578.
  • Zhendong Fan et al., "GFMLLM: Enhance Multi-Modal Large Language Model for Global and Fine-grained Visual Spatial Perception," Nov 1, 2025, doi:10.1016/j.eswa.2025.130239.
  • Samuel Ruipérez-Campillo et al., "Reducing Diverse Sources of Noise in Ventricular Electrical Signals Using Variational Autoencoders," Nov 1, 2025, doi:10.1016/j.eswa.2025.130185.
  • Duo Na et al., "AI-driven tactical recommendations for table tennis: decision optimization with probabilistic interaction model and technical quantification system," Nov 1, 2025, doi:10.1016/j.eswa.2025.128616.
  • Hanshan Li, "Tunnel lining crack detection method based on deformable convolution and feature fusion with image enhancement of Retinex theory," Nov 1, 2025, doi:10.1016/j.eswa.2025.130285.
