Abstract

Detecting tea shoots is the first and most crucial step in achieving intelligent tea harvesting. However, when faced with thousands of tea varieties, establishing a high-quality and comprehensive database comes with significant costs. Improving a model's generalization ability, so that it can be trained on minimal samples yet achieve optimal detection performance across diverse environments and tea varieties, has therefore become an urgent challenge. This paper introduces a model named You Only See Tea (YOST), which utilizes depth maps to enhance generalization. It is applied to detect tea shoots in complex environments and to perform cross-variety tea shoot detection. Unlike common data augmentation strategies that improve generalization by diversifying the data set, our approach strengthens the model's learning capability by strategically amplifying its attention to core target features while suppressing attention to non-core features. YOST is developed upon the You Only Look Once version 7 (YOLOv7) model, utilizing two shared-weight backbone networks to process RGB and depth images; feature layers of the same scale from the two modalities are then fused in our designed Ultra-attention Fusion and Activation Module. With this approach, the model can proficiently detect targets by capturing core features, even in complex environments or with unfamiliar tea leaf varieties. The experimental results indicate that YOST converged faster and more consistently than YOLOv7 during training. Additionally, YOST demonstrated a 6.58% improvement in AP50 for detecting tea shoots in complex environments. Moreover, in a cross-variety tea shoot detection task involving multiple unfamiliar varieties, YOST showed impressive generalization, achieving a maximum AP50 improvement of 33.31% over YOLOv7. These findings establish its superior performance. Our research breaks the heavy reliance of high-generalization models on large numbers of training samples, making it easier to train small-scale, high-generalization models and significantly alleviating the pressure associated with data collection and model training.
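
To make the described architecture concrete, the sketch below illustrates the two ideas the abstract names: a single (weight-sharing) backbone applied to both RGB and depth inputs, and a per-scale attention gate that fuses the two feature streams. This is a minimal, hedged reading assuming a PyTorch-style implementation; the names AttentionFusion, DualStreamDetector, and ToyBackbone are hypothetical placeholders, and the gating layout is only one plausible interpretation, not the paper's actual Ultra-attention Fusion and Activation Module or YOLOv7 backbone.

```python
import torch
import torch.nn as nn


class AttentionFusion(nn.Module):
    """Fuses same-scale RGB and depth feature maps with a learned attention
    gate that amplifies shared (core) features and suppresses the rest.
    Illustrative stand-in for the paper's fusion module."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),  # per-pixel, per-channel attention weights in [0, 1]
        )
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, rgb_feat, depth_feat):
        pair = torch.cat([rgb_feat, depth_feat], dim=1)  # stack both modalities
        attn = self.gate(pair)        # attention map computed from both streams
        return self.proj(pair) * attn  # gated fused features


class ToyBackbone(nn.Module):
    """Tiny stand-in backbone returning two feature scales (YOLOv7 itself
    produces several; this toy keeps the sketch short and runnable)."""

    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.SiLU())
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.SiLU())

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        return [f1, f2]  # multi-scale feature maps, coarse to fine


class DualStreamDetector(nn.Module):
    """Runs both modalities through ONE backbone instance, so the two streams
    share weights, then fuses each matching scale with an attention gate."""

    def __init__(self, backbone: nn.Module, channels_per_scale):
        super().__init__()
        self.backbone = backbone  # reused for RGB and depth -> shared weights
        self.fusers = nn.ModuleList(AttentionFusion(c) for c in channels_per_scale)

    def forward(self, rgb, depth):
        rgb_feats = self.backbone(rgb)
        depth_feats = self.backbone(depth)
        return [f(r, d) for f, r, d in zip(self.fusers, rgb_feats, depth_feats)]


if __name__ == "__main__":
    model = DualStreamDetector(ToyBackbone(), channels_per_scale=[16, 32])
    rgb = torch.randn(1, 3, 64, 64)
    # Replicate a 1-channel depth map to 3 channels so the shared backbone
    # accepts it unchanged (one common convention; the paper may differ).
    depth = torch.randn(1, 1, 64, 64).repeat(1, 3, 1, 1)
    fused = model(rgb, depth)
    print([tuple(f.shape) for f in fused])  # [(1, 16, 32, 32), (1, 32, 16, 16)]
```

In a full detector, the fused multi-scale features would feed the detection head in place of the single-stream backbone outputs; here the fused maps are simply printed to verify shapes.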
