Abstract
Weed maps should be available quickly, reliably, and with high detail to be useful for site-specific management in crop protection and to promote more sustainable agriculture by reducing pesticide use. Here, the optimization of a deep residual convolutional neural network (ResNet-18) for the classification of weed and crop plants in UAV imagery is proposed. The target was to reach sufficient performance on an embedded system while retaining the features of the ResNet-18 model as a basis for fast UAV mapping. This would enable online recognition and subsequent mapping of weeds during UAV flight operation. Optimization was achieved mainly by avoiding the redundant computations that arise when a classification model is applied to overlapping tiles of a larger input image. The model was trained and tested with imagery obtained from a UAV flight campaign at low altitude over a winter wheat field, and classification was performed at species level for the weed species Matricaria chamomilla L., Papaver rhoeas L., Veronica hederifolia L., and Viola arvensis ssp. arvensis observed in that field. The ResNet-18 model with the optimized image-level prediction pipeline reached a performance of 2.2 frames per second on an NVIDIA Jetson AGX Xavier with full-resolution UAV images, which would amount to an area output of about 1.78 ha h⁻¹ for continuous field mapping. The overall accuracy for determining crop, soil, and weed species was 94%. There were some limitations in the detection of species unknown to the model. Shifting from 16-bit to 32-bit model precision yielded no improvement in classification accuracy but a strong decline in speed, especially when a higher number of filters was used in the ResNet-18 model. Future work should be directed towards integrating the mapping process on UAV platforms, guiding UAVs autonomously for mapping purposes, and ensuring the transferability of the models to other crop fields.
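The core of the optimization described above is that a patch classifier applied to overlapping tiles recomputes the same convolutions many times, whereas running the convolutional layers once over the full image yields every tile's response in a single pass. The following is a minimal NumPy sketch of that equivalence with a toy single-filter "model"; it is an illustration of the general technique, not the paper's actual pipeline, and all names in it are hypothetical.

```python
import numpy as np

def conv2d_valid(img, kernel):
    # Plain "valid" 2-D cross-correlation: one output value per kernel position.
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
image = rng.random((20, 20))   # stand-in for a full-resolution UAV image
kernel = rng.random((3, 3))    # stand-in for the model's convolutional filter

# Naive tiling: run the "model" separately on every overlapping 3x3 tile.
# Neighboring tiles share pixels, so most multiplications are redundant.
tile_scores = np.array([
    [conv2d_valid(image[i:i + 3, j:j + 3], kernel)[0, 0] for j in range(18)]
    for i in range(18)
])

# Optimized: one convolution over the whole image computes all tile
# responses at once, with no repeated work on the overlap regions.
full_scores = conv2d_valid(image, kernel)

assert np.allclose(tile_scores, full_scores)
```

The same principle extends to a multi-layer network such as ResNet-18: applying the convolutional backbone once to the full image and reading off per-tile predictions from the resulting feature map avoids re-running the backbone per tile.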
Highlights
Artificial intelligence is transforming the extraction of information from very-high-resolution (VHR) remote sensing data, with neural networks built on deep learning architectures tailored to the needs of image data
This study aims to optimize a deep convolutional neural network (DCNN) for weed identification on embedded systems using unmanned aerial vehicle (UAV) imagery
The training of the residual network (ResNet)-18 model with the 201 × 201 px image patches from the training set reached fast convergence after about 60 epochs, as can be seen from the trend of the accuracy and loss curves
Summary
Artificial intelligence is transforming the extraction of information from very-high-resolution (VHR) remote sensing data, with neural networks built on deep learning architectures tailored to the needs of image data. Applied to the right scenario, this might pave the way for a more sustainable agriculture [1]. One such application would be site-specific weed management (SSWM). Pesticides are supplied with dosage instructions that are calculated uniformly on a "per hectare" basis for the entire field. The target in this case is the area within the field