YOLO-CESn: an efficient field weed detection model based on an improved YOLO11

Abstract

Weeds in farmland severely reduce crop yield and quality, yet traditional detection models struggle with weed diversity, blurred boundaries, limited multi-scale feature extraction, and occlusion. To address these challenges, this study proposes an improved detection model, YOLO-CESn, based on You Only Look Once 11 n (YOLO11n). First, deformable convolution v4 (DCNv4) and ghost modules are integrated within a cross stage partial with kernel size 2 (C3k2) structure in the backbone to enhance geometric feature extraction and reduce false positives caused by diverse weed morphologies and unclear boundaries. Second, an efficient multi-scale fusion neck is designed by combining low-level and high-level features with a tiny-object detection head, improving recognition of early-stage weeds and achieving full coverage across growth stages. Finally, a soft non-maximum suppression post-processing mechanism decays the confidence scores of overlapping bounding boxes instead of suppressing them outright, alleviating missed detections under dense distributions and occlusion. Experimental results show that on the Fine24 dataset, YOLO-CESn achieves 74.0% mAP@0.5 and 52.1% mAP@0.5:0.95, improvements of 4.5% and 3.7% over YOLO11n, respectively. On the CottonWeedDet12 dataset, the model attains 94.0% mAP@0.5 and 88.4% mAP@0.5:0.95, increases of 1.8% and 1.6%. With only 7.4M parameters and an inference speed of 120 FPS, YOLO-CESn provides a lightweight and effective solution for weed detection in precision agriculture.
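
To make the post-processing step concrete, here is a minimal NumPy sketch of Gaussian soft-NMS; the [x1, y1, x2, y2] box format, the sigma value, and the final score threshold are illustrative assumptions, not values taken from the paper.

    import numpy as np

    def iou(box, boxes):
        """IoU between one box and an array of boxes (float [x1, y1, x2, y2])."""
        x1 = np.maximum(box[0], boxes[:, 0])
        y1 = np.maximum(box[1], boxes[:, 1])
        x2 = np.minimum(box[2], boxes[:, 2])
        y2 = np.minimum(box[3], boxes[:, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
        return inter / (area(box) + area(boxes) - inter + 1e-9)

    def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
        """Gaussian soft-NMS: decay overlapping scores instead of discarding boxes."""
        scores = scores.astype(float).copy()
        keep = []
        idx = np.arange(len(scores))
        while idx.size > 0:
            best = idx[np.argmax(scores[idx])]
            keep.append(best)
            idx = idx[idx != best]
            # Heavily overlapping boxes lose confidence smoothly.
            scores[idx] *= np.exp(-iou(boxes[best], boxes[idx]) ** 2 / sigma)
            idx = idx[scores[idx] > score_thresh]
        return keep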

Similar Papers
  • Conference Article
  • Cited by 5
  • 10.1117/12.2664131
OpenWeedGUI: an open-source graphical user interface for weed imaging and detection
  • Jun 13, 2023
  • Jiajun Xu + 1 more

Graphical user interfaces (GUIs) that interact with hardware, data, and models are beneficial for accelerating the deployment and adoption of machine vision technology in precision agriculture. They are particularly important for end users without technical expertise in computer programming. Making GUIs open-source and public is further beneficial, enabling community efforts toward rapid iterations of prototyping and testing. Weed detection is central to machine vision-based precision weeding, thereby protecting crops, reducing resource inputs, and managing herbicide resistance. Considerable research has been done on weed imaging and deep learning (DL) modeling for weed detection, but few GUI tools are publicly available for image collection, visualization, model deployment, and evaluation. This study therefore presents a simple, open-source, easy-to-use GUI, OpenWeedGUI, for weed imaging and DL-based weed detection, with the goal of bridging the gap between machine vision technology and users. The GUI was developed in Python with the aid of three major open-source libraries, PyQt, Vimba (for camera interfacing), and OpenCV, covering image collection, transformation, weed detection, and visualization. It features a window for live display of weed images and detection results highlighted with bounding boxes, supports flexible user control of imaging settings (e.g., exposure time, resolution, and frame rate), enables the deployment of a large suite of trained YOLO object detection models for real-time weed detection, and allows users to save images and detection results to a local directory on demand. The OpenWeedGUI was tested on a mobile machine vision platform for weed imaging and detection, and it can be adapted to other machine vision tasks in precision agriculture. The capture-and-detect loop such a GUI wraps is sketched below.
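
A minimal sketch of that loop, using the Ultralytics Python API and a plain OpenCV window in place of the PyQt/Vimba stack; the weights filename is a placeholder, not a file shipped with OpenWeedGUI.

    import cv2
    from ultralytics import YOLO

    model = YOLO("weed_yolo.pt")           # placeholder weights file
    cap = cv2.VideoCapture(0)              # webcam stand-in for the Vimba camera
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        result = model(frame, verbose=False)[0]
        for box in result.boxes:
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            label = f"{result.names[int(box.cls)]} {float(box.conf):.2f}"
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
            cv2.putText(frame, label, (x1, y1 - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
        cv2.imshow("weed detection", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()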

  • Research Article
  • 10.1071/cp24243
Deep learning-based object detection model for location and recognition of weeds in cereal fields using colour imagery
  • Apr 10, 2025
  • Crop & Pasture Science
  • Hossein Akhtari + 3 more

Context: Automatic weed detection and control is crucial in precision agriculture, especially in cereal fields where overlapping crops and narrow row spacing present significant challenges. This research prioritized small weed detection and performance on dense images. Aims: This study investigated two recent convolutional neural networks (CNNs) with different architectures and detection models for weed detection in cereal fields. The feature pyramid network (FPN) technique was applied to improve performance. To tackle challenges such as high weed density and occlusion, images were divided into smaller parts with pixel-area thresholds, achieving an approximately 22% increase in average precision (AP); see the tiling sketch below. Methods: The dataset includes red–green–blue (RGB) images of cereal fields captured in Germany (2018–2019) at varying growth stages. Images were annotated with weed labels using 'LabelImg'. Models were evaluated by precision, recall, prediction time, and detection rate. Key results: The FasterRCNN-ResNet50 with FPN performed best in terms of detection counts: it detected 508 of 535 annotated weeds in 36 test images, a detection rate of 94.95% with a 95% confidence interval of [92.76%, 96.51%]. Additionally, a method was proposed to boost average precision and recall in high-density weed images. Conclusions: The presented algorithms and methods are well suited to solving the above-mentioned challenges. Implications: This research evaluated deep learning models, recommends the best-performing one, and stresses reliable weed identification at all growth stages.
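
The tiling step can be illustrated with a short sketch: split a large field image into overlapping patches so small weeds occupy more pixels relative to the network input, then filter detections by pixel area. Tile size, overlap, and the area threshold below are illustrative assumptions, not the paper's values.

    def tile_image(img, tile=640, overlap=0.2):
        """Yield (x0, y0, patch); detections must be shifted back by (x0, y0)."""
        step = int(tile * (1 - overlap))
        h, w = img.shape[:2]
        ys = sorted(set(list(range(0, max(h - tile, 1), step)) + [max(h - tile, 0)]))
        xs = sorted(set(list(range(0, max(w - tile, 1), step)) + [max(w - tile, 0)]))
        for y0 in ys:
            for x0 in xs:
                yield x0, y0, img[y0:y0 + tile, x0:x0 + tile]

    def filter_by_area(dets, min_area=64):
        """Drop boxes below a pixel-area threshold; dets are (x1, y1, x2, y2, score)."""
        return [d for d in dets if (d[2] - d[0]) * (d[3] - d[1]) >= min_area]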

  • Research Article
  • Cited by 1
  • 10.1002/ps.8554
Semantic segmentation for weed detection in corn.
  • Nov 25, 2024
  • Pest Management Science
  • Teng Liu + 7 more

Reliable, fast, and accurate weed detection in farmland is crucial for precision weed management but remains challenging due to the diverse weed species present across fields. While deep learning models for direct weed detection have been developed in previous studies, creating a training dataset that encompasses all possible weed species, ecotypes, and growth stages is practically unfeasible. This study proposes a novel approach that integrates semantic segmentation with image processing: segment the crop pixels and treat all vegetation outside the crop mask as weeds. The method employs a semantic segmentation model to generate a mask of corn (Zea mays L.) plants and identifies all green pixels outside the mask as weeds. This indirect segmentation reduces model complexity by avoiding direct detection of diverse weed species. To enhance real-time performance, the segmentation model was optimized through knowledge distillation, yielding faster, lighter-weight inference. Experimental results demonstrated that the distilled DeepLabV3+ model achieved an average accuracy (aAcc) exceeding 99.5%, a mean intersection over union (mIoU) across all categories above 95.5%, and an operating speed above 34 frames per second (FPS). By focusing on crop segmentation, the method sidesteps the complexity of diverse weed species, varying densities, and different growth stages, offering a practical and efficient solution for training effective computer vision models for precision weed detection and control. The mask arithmetic at the core of this idea is sketched below.
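
Once the crop mask is available, the weed mask reduces to simple mask arithmetic. In the sketch below, an Excess-Green threshold stands in for any vegetation segmentation, and crop_mask is assumed to be the binary corn mask produced by a model such as DeepLabV3+.

    import cv2
    import numpy as np

    def weed_mask(bgr, crop_mask, exg_thresh=20):
        """Green pixels outside the binary crop mask are labeled as weeds."""
        b, g, r = cv2.split(bgr.astype(np.int16))
        exg = 2 * g - r - b                      # Excess Green vegetation index
        vegetation = (exg > exg_thresh).astype(np.uint8)
        return vegetation & (1 - crop_mask)      # vegetation that is not crop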

  • Research Article
  • Cited by 4
  • 10.3390/electronics13091699
OpenWeedGUI: An Open-Source Graphical Tool for Weed Imaging and YOLO-Based Weed Detection
  • Apr 27, 2024
  • Electronics
  • Jiajun Xu + 2 more

Weed management impacts crop yield and quality. Machine vision technology is crucial to the realization of site-specific precision weeding for sustainable crop production. Progress has been made in developing computer vision algorithms, machine learning models, and datasets for weed recognition, but there has been a lack of open-source, publicly available software tools that link imaging hardware and offline-trained models for system prototyping and evaluation, hindering community-wide development efforts. Graphical user interfaces (GUIs) are among the tools that can integrate hardware, data, and models to accelerate the deployment and adoption of machine vision-based weeding technology. This study introduces a novel GUI called OpenWeedGUI, designed to ease image acquisition and the deployment of YOLO (You Only Look Once) models for real-time weed detection, bridging the gap between machine vision and artificial intelligence (AI) technologies and their users. The GUI was created in the PyQt framework with the aid of open-source libraries for image collection, transformation, weed detection, and visualization. It consists of functional modules for flexible user control and a live display window for visualizing weed imagery and detections. Notably, it supports the deployment of a suite of 31 different YOLO weed detection models, providing flexibility in model selection. Extensive indoor and field tests demonstrated the competencies of the developed software. The OpenWeedGUI is expected to be a useful tool for promoting community efforts to advance precision weeding technology.

  • Conference Article
  • Cited by 468
  • 10.1109/cvpr.2016.78
Deep Saliency with Encoded Low Level Distance Map and High Level Features
  • Jun 1, 2016
  • Gayoung Lee + 2 more

Recent advances in saliency detection have utilized deep learning to obtain high-level features to detect salient regions in a scene. These advances have demonstrated superior results over previous works that utilize hand-crafted low-level features for saliency detection. In this paper, we demonstrate that hand-crafted features can provide complementary information to enhance performance of saliency detection that utilizes only high-level features. Our method utilizes both high-level and low-level features for saliency detection under a unified deep learning framework. The high-level features are extracted using the VGG-net, and the low-level features are compared with other parts of an image to form a low-level distance map. The low-level distance map is then encoded using a convolutional neural network (CNN) with multiple 1×1 convolutional and ReLU layers. We concatenate the encoded low-level distance map and the high-level features, and connect them to a fully connected neural network classifier to evaluate the saliency of a query region. Our experiments show that our method can further improve the performance of state-of-the-art deep learning-based saliency detection methods.
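
The fusion architecture can be summarized in a few lines of PyTorch: encode the low-level distance map with 1×1 convolutions and ReLUs, then concatenate the result with the deep features ahead of a fully connected classifier. Layer widths in this sketch are illustrative, not the paper's exact sizes.

    import torch
    import torch.nn as nn

    class FusionSaliency(nn.Module):
        def __init__(self, low_ch=3, high_dim=4096, enc_dim=128):
            super().__init__()
            # Encode the low-level distance map with 1x1 convs and ReLUs.
            self.encoder = nn.Sequential(
                nn.Conv2d(low_ch, enc_dim, kernel_size=1), nn.ReLU(),
                nn.Conv2d(enc_dim, enc_dim, kernel_size=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            # The concatenated vector feeds a fully connected saliency classifier.
            self.classifier = nn.Sequential(
                nn.Linear(enc_dim + high_dim, 512), nn.ReLU(),
                nn.Linear(512, 1),               # saliency score for the query region
            )

        def forward(self, low_map, high_feat):
            fused = torch.cat([self.encoder(low_map), high_feat], dim=1)
            return self.classifier(fused)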

  • Research Article
  • Cited by 157
  • 10.1016/j.compag.2023.107655
YOLOWeeds: A novel benchmark of YOLO object detectors for multi-class weed detection in cotton production systems
  • Jan 20, 2023
  • Computers and Electronics in Agriculture
  • Fengying Dang + 3 more


  • Research Article
  • Cited by 490
  • 10.1016/j.compag.2019.02.005
A review on weed detection using ground-based machine vision and image processing techniques
  • Feb 14, 2019
  • Computers and Electronics in Agriculture
  • Aichen Wang + 2 more


  • Research Article
  • Cited by 142
  • 10.3390/rs15020539
Deep Object Detection of Crop Weeds: Performance of YOLOv7 on a Real Case Dataset from UAV Images
  • Jan 16, 2023
  • Remote Sensing
  • Ignazio Gallo + 5 more

Weeds are a crucial threat to agriculture, and to preserve crop productivity, spreading agrochemicals is a common practice with a potential negative impact on the environment; methods that can support intelligent, targeted application are needed. Weed identification and mapping are therefore critical steps in site-specific weed management. Unmanned aerial vehicle (UAV) data streams are considered the best option for weed detection due to the high resolution and flexibility of data acquisition and the spatially explicit nature of the imagery. However, given unstructured crop conditions and the high biological variation of weeds, generating accurate weed recognition and detection models remains a difficult challenge. Two critical barriers to tackling this challenge are (1) the lack of case-specific, large, and comprehensive weed UAV image datasets for the crop of interest, and (2) defining the most appropriate computer vision (CV) weed detection models to assess the operationality of detection approaches under real field conditions. Deep learning (DL) algorithms, appropriately trained to deal with the real-case complexity of UAV data in agriculture, can provide valid alternatives to standard CV approaches for accurate weed recognition. In this framework, this paper first introduces a new weed and crop dataset named Chicory Plant (CP) and then tests state-of-the-art DL algorithms for object detection. A total of 12,113 bounding box annotations were generated to identify weed targets (Mercurialis annua) in more than 3000 RGB images of chicory plantations, collected using a UAV system at various stages of crop and weed growth. Deep weed object detection was conducted by testing the most recent You Only Look Once version 7 (YOLOv7) on both the CP dataset and a publicly available dataset (Lincoln beet (LB)), for which a previous version of YOLO had been used to map weeds and crops. The YOLOv7 results on the CP dataset were encouraging, outperforming the other YOLO variants with values of 56.6%, 62.1%, and 61.3% for mAP@0.5, recall, and precision, respectively. Furthermore, the YOLOv7 model applied to the LB dataset surpassed the existing published results, increasing the scores from 51% to 61% for total mAP@0.5, from 67.5% to 74.1% for weeds, and from 34.6% to 48% for sugar beets. This study illustrates the potential of the YOLOv7 model for weed detection but underscores the fundamental need for large-scale, annotated weed datasets to develop and evaluate models in real-case field circumstances.

  • Research Article
  • 10.3390/agriculture15080807
MKD8: An Enhanced YOLOv8 Model for High-Precision Weed Detection
  • Apr 8, 2025
  • Agriculture
  • Wenxuan Su + 4 more

Weeds are an inevitable element in agricultural production, and their significant negative impacts on crop growth make weed detection a crucial task in precision agriculture. The diversity of weed species and the substantial background noise in weed images pose considerable challenges for weed detection. To address these challenges, constructing a high-quality dataset and designing an effective artificial intelligence model are essential. We captured 2002 images containing 10 types of weeds from cotton and corn fields, establishing the CornCottonWeed dataset, which provides rich data support for weed-detection tasks. Based on this dataset, we developed the MKD8 model for weed detection. To enhance the model's feature extraction capabilities, we designed the CVM and CKN modules, which effectively alleviate deep-feature information loss and the difficulty of capturing fine-grained features, enabling the model to distinguish weed species more accurately. To suppress the interference of background noise, we designed the ASDW module, which combines dynamic convolution and attention mechanisms to further improve the model's ability to differentiate and detect weeds (a generic illustration of pairing convolution with attention follows). Experimental results show that the MKD8 model achieved mAP50 and mAP[50:95] of 88.6% and 78.4%, respectively, on the CornCottonWeed dataset, improvements of 9.9% and 8.5% over the baseline model. On the public weed dataset CottonWeedDet12, the mAP50 and mAP[50:95] reached 95.3% and 90.5%, respectively, improvements of 1.0% and 1.4% over the baseline model.
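
As a generic illustration of pairing convolution with channel attention (not the paper's ASDW module, whose internals are not described here), a depthwise convolution re-weighted by squeeze-and-excitation attention might look like this in PyTorch:

    import torch
    import torch.nn as nn

    class AttnDWConv(nn.Module):
        """Depthwise conv re-weighted by SE channel attention (generic sketch)."""
        def __init__(self, ch, k=3, reduction=4):
            super().__init__()
            self.dw = nn.Conv2d(ch, ch, k, padding=k // 2, groups=ch)
            self.se = nn.Sequential(             # per-channel attention weights
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(ch, ch // reduction), nn.ReLU(),
                nn.Linear(ch // reduction, ch), nn.Sigmoid(),
            )

        def forward(self, x):
            w = self.se(x).unsqueeze(-1).unsqueeze(-1)   # (N, C, 1, 1)
            return self.dw(x) * w                        # re-weighted conv output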

  • Research Article
  • 10.3390/su17136150
Lightweight YOLOv8-Based Model for Weed Detection in Dryland Spring Wheat Fields
  • Jul 4, 2025
  • Sustainability
  • Zhengyuan Qi + 3 more

Efficient weed detection in dryland spring wheat fields is crucial for sustainable agriculture, as it enables targeted interventions that reduce herbicide use, minimize environmental impact, and optimize resource allocation in water-limited farming systems. This paper presents HSG-Net, a novel lightweight object detection model based on YOLOv8 for weed identification in dryland spring wheat fields. The proposed architecture integrates three key innovations: an HGNetv2 backbone for efficient feature extraction, C2f-S modules with star-shaped attention mechanisms for enhanced feature representation, and Group Head detection heads for parameter-efficient prediction. Experiments on a dataset of eight common weed species in dryland spring wheat fields show that HSG-Net improves detection accuracy while cutting computational costs, outperforming modern deep learning approaches. The model effectively addresses the unique challenges of weed detection in dryland agriculture, including visual similarity between crops and weeds, variable illumination conditions, and complex backgrounds. Ablation studies confirm the complementary contributions of each architectural component, with the full HSG-Net model achieving an optimal balance between accuracy and resource efficiency. The lightweight nature of HSG-Net makes it particularly suitable for deployment on resource-constrained devices used in precision agriculture, enabling real-time weed detection and targeted intervention in field conditions. This work represents an important advancement in developing practical deep learning solutions for sustainable weed management in dryland farming systems.

  • Research Article
  • 10.3389/fpls.2025.1556275
PHRF-RTDETR: a lightweight weed detection method for upland rice based on RT-DETR
  • Jun 24, 2025
  • Frontiers in Plant Science
  • Xianjin Jin + 7 more

Introduction: Weeds pose a greater threat to rice yield and quality in upland environments than in paddy fields, and effective weed detection is a critical prerequisite for intelligent weed control technologies. However, current weed detection methods for upland rice often struggle to balance accuracy and lightweight design, significantly hindering the practical application and widespread adoption of intelligent weeding technologies in real-world agricultural scenarios. To address this issue, we enhanced the baseline RT-DETR model and propose a lightweight weed detection model for upland rice, named PHRF-RTDETR. Methods: First, we propose a novel lightweight backbone network, termed PGRNet, to replace the computationally intensive feature extraction network in RT-DETR. Second, we integrate HiLo, an attention mechanism that adds no parameters, into the AIFI module to enhance the model's capability to capture multi-frequency features. Furthermore, the RepC3 block is optimized by incorporating the RetBlock structure, resulting in RetC3, which effectively balances feature fusion and computational efficiency. Finally, the conventional GIoU loss is replaced with the Focaler-WIoUv3 loss function to significantly improve the model's generalization performance (the baseline GIoU loss is sketched below for reference). Results: PHRF-RTDETR achieves precision, recall, mAP50, and mAP50:95 scores of 92%, 85.6%, 88.2%, and 76.6%, respectively, all within 1.7 percentage points of the baseline model on upland rice weed detection. In terms of lightweight indicators, PHRF-RTDETR reduces floating-point operations, parameter count, and model size by 59.3%, 53.7%, and 53.9%, respectively, compared to the baseline. Compared with the traditional detection models Faster R-CNN and SSD, the YOLO series, and the RT-DETR series, PHRF-RTDETR effectively balances lightweight design and accuracy for weed detection in upland rice. Discussion: Overall, the PHRF-RTDETR model shows potential for the detection modules of intelligent weeding robots in upland rice systems, offering the dual benefits of reducing agricultural production costs through labor efficiency and contributing to improved food security in drought-prone regions.
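
For reference, the baseline GIoU loss that PHRF-RTDETR replaces penalizes box pairs by the empty area of their smallest enclosing box; a minimal sketch for boxes in [x1, y1, x2, y2] format (Focaler-WIoUv3 itself is not reproduced here):

    def giou_loss(a, b, eps=1e-9):
        """1 - GIoU for two boxes a, b in [x1, y1, x2, y2] format."""
        inter_w = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        inter_h = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = inter_w * inter_h
        area = lambda t: (t[2] - t[0]) * (t[3] - t[1])
        union = area(a) + area(b) - inter
        iou = inter / (union + eps)
        # Smallest axis-aligned box enclosing both a and b.
        enclose = (max(a[2], b[2]) - min(a[0], b[0])) * \
                  (max(a[3], b[3]) - min(a[1], b[1]))
        return 1.0 - (iou - (enclose - union) / (enclose + eps))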

  • Research Article
  • 10.22060/eej.2018.13787.5189
A Saliency Detection Model via Fusing Extracted Low-level and High-level Features from an Image
  • Dec 3, 2018
  • S Asadi Amiri + 1 more

Salient regions attract more human attention than other regions of an image. Both low-level and high-level features are utilized in salient region detection: low-level features contain primitive information such as color or texture, while high-level features model properties of the human visual system. Recently, some salient region detection methods have been proposed based on only low-level or only high-level features; it is necessary to consider both to overcome their respective limitations. In this paper, a novel saliency detection method is proposed which uses both low-level and high-level features. Color difference and texture difference are considered as low-level features, while modeling the human tendency to attend to the center of the image is considered as a high-level feature. In this approach, color saliency maps are extracted from each channel in Lab color space, and texture saliency maps are extracted using the wavelet transform and the local variance of each channel. Finally, these feature maps are fused to construct the final saliency map. In a post-processing step, morphological operators and connected-components analysis are applied to the final saliency map to produce contiguous salient regions. We compared the proposed method with four state-of-the-art methods on the MSRA (Microsoft Research Asia) database. The average F-measure over 1000 images of the MSRA dataset was 0.7824. Experimental results demonstrate that the proposed method outperforms the existing methods in salient region detection.
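
The color and center cues can be sketched compactly: the following computes a Lab color-saliency map (per-pixel distance from the mean color) fused with a Gaussian center prior. The wavelet texture cue is omitted, and the fusion weight is an illustrative assumption.

    import cv2
    import numpy as np

    def color_center_saliency(bgr, w_color=0.7):
        """Fuse Lab color saliency with a Gaussian center prior."""
        lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
        # Color cue: per-pixel distance from the mean Lab color.
        color = np.linalg.norm(lab - lab.mean(axis=(0, 1)), axis=2)
        color /= color.max() + 1e-9
        # Center prior: attention falls off away from the image center.
        h, w = color.shape
        yy, xx = np.mgrid[0:h, 0:w]
        center = np.exp(-(((yy - h / 2) / (0.5 * h)) ** 2 +
                          ((xx - w / 2) / (0.5 * w)) ** 2))
        return w_color * color + (1 - w_color) * center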

  • Research Article
  • Cited by 2
  • 10.13031/ja.15413
Aerial-Based Weed Detection Using Low-Cost and Lightweight Deep Learning Models on an Edge Platform
  • Jan 1, 2023
  • Journal of the ASABE
  • Nitin Rai + 4 more

Highlights:
  • Lightweight deep learning models were trained on an edge device to identify weeds in aerial images.
  • A customized configuration file was set up to train the models.
  • The models were deployed to detect weeds in aerial images and videos in near real time.
  • CSPMobileNet-v2 and YOLOv4-lite are the recommended models for weed detection on an edge platform.

Abstract. Deep learning (DL) techniques have proven to be a successful approach to detecting weeds for site-specific weed management (SSWM). In the past, most research has trained and deployed pre-trained DL models on high-end systems coupled with expensive graphical processing units (GPUs), and only a limited number of studies have used DL models on an edge system for aerial-based weed detection. Therefore, with a focus on minimizing hardware cost, eight DL models were trained and deployed on an edge device in this study to detect weeds in aerial images and videos. Four large models, namely CSPDarkNet-53, DarkNet-53, DenseNet-201, and ResNet-50, along with four lightweight models, CSPMobileNet-v2, YOLOv4-lite, EfficientNet-B0, and DarkNet-Ref, were considered for training a customized DL architecture. Along with trained-model performance scores (average precision, mean average precision (mAP), intersection over union, precision, and recall), metrics for assessing edge-system performance, such as billion floating-point operations per second (BFLOPS), frames per second (FPS), and GPU memory usage, were also estimated. The lightweight CSPMobileNet-v2 and YOLOv4-lite models outperformed the others in detecting weeds in aerial images, achieving mAP scores of 83.2% and 82.2% and delivering 60.9 and 61.1 FPS during near-real-time weed detection in aerial videos, respectively. The popular ResNet-50 model achieved a mAP of 79.6%, the highest among the large models. Based on the results, the two lightweight models, CSPMobileNet-v2 and YOLOv4-lite, are recommended for use on a low-cost edge system to detect weeds in aerial images with significant accuracy. Keywords: Aerial image, Deep learning, Edge device, Precision agriculture, Weed detection.
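
FPS, the headline edge metric here, is straightforward to measure; in the sketch below, model and frames are placeholders for the trained detector and the test video frames.

    import time

    def measure_fps(model, frames):
        """Average end-to-end inference throughput over a list of frames."""
        start = time.perf_counter()
        for frame in frames:
            model(frame)                 # one inference per frame
        elapsed = time.perf_counter() - start
        return len(frames) / elapsed     # frames per second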

  • Conference Article
  • Cited by 13
  • 10.13031/aim.202200214
DeepCottonWeeds (DCW): A Novel Benchmark of YOLO Object Detectors for Weed Detection in Cotton Production Systems
  • Jan 1, 2022
  • Fengying Dang + 4 more

Abstract. Weeds are among the major threats to cotton production. Overreliance on herbicides for weed control has accelerated the evolution of herbicide resistance in weeds and raised growing concerns about the environment, food safety, and human health. Machine vision systems for automated or robotic weeding have received significant interest for integrated, sustainable weed management. However, in the presence of unstructured field conditions and significant biological variability of weeds, developing robust weed identification and detection systems remains a challenging task. Two critical obstacles are the lack of dedicated, large-scale image datasets of weeds specific to cotton production and the development of machine learning models for weed detection. This study presents a new dataset of weeds important to U.S. cotton production systems, consisting of 5648 images of 12 weed classes with a total of 9370 bounding box annotations, collected under natural light conditions and at varied weed growth stages in cotton fields. Furthermore, a comprehensive benchmark of 18 selected state-of-the-art YOLO object detectors, spanning YOLOv3, YOLOv4, Scaled-YOLOv4, and YOLOv5, was established for weed detection on the dataset. Detection accuracy in terms of mAP@0.5 ranged from 88.14% (YOLOv3-tiny) to 95.22% (YOLOv4), and accuracy in terms of mAP@[0.5:0.95] ranged from 68.18% (YOLOv3-tiny) to 89.72% (Scaled-YOLOv4). All the YOLO models, especially YOLOv5n and YOLOv5s, showed great potential for real-time weed detection, and data augmentation increased weed detection accuracy. Both the code and the weed dataset for model benchmarking are publicly available (https://github.com/Derekabc/DCW) and are expected to be valuable resources for future research in weed detection and beyond.

  • Research Article
  • Cited by 11
  • 10.3390/app122412828
Modified Barnacles Mating Optimization with Deep Learning Based Weed Detection Model for Smart Agriculture
  • Dec 14, 2022
  • Applied Sciences
  • Amani Abdulrahman Albraikan + 5 more

Weed control is a significant means to enhance crop production: weeds are accountable for 45% of the agriculture sector's crop losses, which primarily occur because of competition with crops. Accurate and rapid weed detection in agricultural fields is difficult because of the wide range of weed species present at various densities and growth phases. Presently, several smart agriculture tasks, such as weed detection, plant disease detection, species identification, water and soil conservation, and crop yield prediction, can be supported by technology. In this article, we propose a Modified Barnacles Mating Optimization with Deep Learning based weed detection (MBMODL-WD) technique, which aims to automatically identify weeds in the agricultural field. The MBMODL-WD technique first uses Gabor filtering (GF) for noise removal (a sketch of such a filter follows). For automated weed detection, it then uses the DenseNet-121 model for feature extraction, with the MBMO algorithm for hyperparameter optimization; the MBMO algorithm integrates self-population-based initialization into the standard BMO algorithm. Finally, an Elman Neural Network (ENN) is applied for weed classification. To demonstrate the enhanced performance of the MBMODL-WD approach, a series of simulation analyses was performed; a comprehensive set of simulations highlighted the enhanced performance of the MBMODL-WD methodology over other DL models, with a maximum accuracy of 98.99%.
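
The Gabor filtering stage can be sketched with OpenCV's built-in kernel generator; the kernel parameters below are illustrative, not the values used in the MBMODL-WD pipeline.

    import cv2
    import numpy as np

    def gabor_filter(gray, ksize=21, sigma=4.0, theta=0.0, lambd=10.0, gamma=0.5):
        """Apply a single Gabor kernel to a grayscale image (preprocessing sketch)."""
        kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                    lambd, gamma, psi=0)
        kernel /= kernel.sum() + 1e-9    # normalize so the response scale is preserved
        return cv2.filter2D(gray.astype(np.float32), -1, kernel)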

