Walnut detection in mountainous and hilly regions often faces significant challenges due to obstruction, which degrades model performance. To address this issue, we collected a dataset of 2379 walnut images from these regions, with detailed annotations for both obstructed and non-obstructed walnuts. Based on this dataset, we propose OW-YOLO, a lightweight object detection model designed specifically for detecting small, obstructed walnuts. The model’s backbone was restructured around the DWR-DRB (Dilated Weighted Residual-Dilated Residual Block) module. To improve efficiency and multi-scale feature fusion, we incorporated the HSFPN (High-Level Screening Feature Pyramid Network) and redesigned the detection head, replacing the original head with the more efficient LADH detection head and removing the head that processes 32 × 32 feature maps. These changes reduced model complexity and markedly improved detection accuracy for obstructed walnuts. Experiments were conducted with the PyTorch framework on an NVIDIA GeForce RTX 4060 Ti GPU. The results show that OW-YOLO outperforms the compared models, achieving a mean average precision at IoU 0.5 (mAP@0.5) of 83.6%, an mAP@[0.5:0.95] of 53.7%, and an F1 score of 77.9%. Meanwhile, the parameter count decreased by 49.2%, the weight file size by 48.1%, and the computational load by 37.3%, effectively mitigating the impact of obstruction on detection accuracy. These findings provide robust support for the future development of walnut agriculture and lay a solid foundation for the broader adoption of intelligent agriculture.
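As context for the reported figures, the F1 score is the harmonic mean of precision and recall, and the complexity reductions (parameters, weight size, computation) are simple relative decreases. A minimal sketch of both computations, using hypothetical precision/recall and parameter counts that are not from the paper:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

def reduction_pct(before: float, after: float) -> float:
    """Relative decrease from `before` to `after`, as a percentage."""
    return 100.0 * (before - after) / before

# Hypothetical illustration (values are NOT taken from the paper):
print(round(f1_score(0.80, 0.76), 3))        # → 0.779
print(round(reduction_pct(3.0e6, 1.5e6), 1))  # 3.0M → 1.5M parameters: 50.0
```

The same `reduction_pct` arithmetic underlies statements like "parameter count decreased by 49.2%": it compares the modified model's count against the baseline's.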