Abstract

Difficulties in the recognition of beet seedlings and weeds can arise from the complex backgrounds of natural environments and a lack of light at night. In the current study, a novel depth fusion algorithm based on visible and near-infrared imagery was proposed. In particular, visible (RGB) and near-infrared images were superimposed at the pixel level via the depth fusion algorithm and subsequently fused into three-channel multi-modality images in order to characterize the edge details of beets and weeds. Moreover, an improved region-based fully convolutional network (R-FCN) model was applied in order to overcome the geometric modeling restriction of traditional convolutional kernels. More specifically, in the convolutional feature extraction layers, deformable convolution was adopted in place of the traditional convolutional kernel, allowing the entire network to extract more precise features. In addition, online hard example mining (OHEM) was introduced to excavate the hard negative samples in the detection process so that misidentified samples could be retrained. A total of four models were established via the aforementioned improvements. Results demonstrate that the average precision values of the optimal improved model for beets and weeds were 84.8% and 93.2%, respectively, while the mean average precision improved to 89.0%. Compared with the classical R-FCN model, the optimal model not only performed considerably better, but its number of parameters also did not expand significantly. Our study can provide a theoretical basis for the subsequent development of intelligent weed control robots under weak light conditions.
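Neither the exact fusion weights nor the deformable-convolution configuration is given in this summary, so the following Python sketches are illustrative assumptions rather than the authors' implementation. The first shows one common pixel-level superposition scheme: the single NIR band is broadcast across the three RGB channels with a hypothetical blending weight `alpha`, yielding a three-channel multi-modality image of the same size.

```python
import numpy as np

def fuse_rgb_nir(rgb: np.ndarray, nir: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Superimpose a registered RGB image (H, W, 3) and NIR band (H, W) pixel-wise.

    `alpha` is a hypothetical NIR weight; the paper's exact depth fusion
    weighting is not stated in this summary.
    """
    assert rgb.shape[:2] == nir.shape, "images must be registered to the same size"
    nir_band = nir.astype(np.float32)[..., np.newaxis]            # (H, W, 1), broadcast below
    fused = (1.0 - alpha) * rgb.astype(np.float32) + alpha * nir_band
    return np.clip(fused, 0, 255).astype(np.uint8)                # three-channel 8-bit output
```

The second sketch illustrates the kind of swap described for the feature extraction layers: a standard 3 × 3 convolution replaced by `torchvision.ops.DeformConv2d`, where a small auxiliary convolution predicts per-location sampling offsets. The block size and placement within the backbone are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableConvBlock(nn.Module):
    """Drop-in replacement for a 3x3 convolution: an auxiliary conv predicts
    (dx, dy) offsets for each of the 9 kernel positions, letting the kernel
    sample off the fixed grid and adapt to object geometry."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.offset = nn.Conv2d(in_ch, 2 * 3 * 3, kernel_size=3, padding=1)  # 18 offset channels
        self.deform = DeformConv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.deform(x, self.offset(x))

# A fused three-channel multi-modality image passes through with unchanged spatial size.
x = torch.randn(1, 3, 224, 224)
print(DeformableConvBlock(3, 64)(x).shape)  # torch.Size([1, 64, 224, 224])
```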

Highlights

  • In order to investigate the detection and identification of beets and weeds in complex backgrounds, images of beets and weeds were collected at the University of Bonn, Germany, in 2016

  • Difficulties in the recognition of beet seedlings and weeds can arise from a complex background in the natural environment and a lack of light at night

  • Based on the classic R-FCN network, the visible and near-infrared images of beets and weeds were fused into three-channel multi-modality images at the pixel level using the depth fusion algorithm


Summary

Methods

In order to investigate the detection and identification of beets and weeds in complex backgrounds, images of beets and weeds were collected at the University of Bonn, Germany, in 2016. The images were collected via a multi-modality camera (JAI AD-130GE) equipped with two high-sensitivity 1.3-megapixel CCD multispectral sensors. The camera can simultaneously capture visible (400 nm~650 nm) and near-infrared (NIR) (760 nm~1000 nm) images, with an output image size of 1296 × 1296 pixels [17]. The dataset contains a total of 2,093 images of beets and weeds at different growth stages (Figure 1). During data acquisition, beet seedlings and weeds at different levels of maturity were imaged under varying angle transformations. The same plant (beet or weed) was imaged multiple times with different degrees of overlap and occlusion between beets and weeds.
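As a concrete illustration of how one such pair of registered images might be read and checked before fusion, the sketch below loads an RGB/NIR pair with OpenCV and verifies the stated 1296 × 1296 output size. The file paths and naming scheme are hypothetical, since the dataset's directory layout is not described here.

```python
import cv2
import numpy as np

def load_rgb_nir_pair(rgb_path: str, nir_path: str) -> tuple[np.ndarray, np.ndarray]:
    """Load one registered RGB/NIR pair from the two-sensor camera.

    The paths and naming are hypothetical; this summary does not describe
    the dataset's actual layout.
    """
    rgb = cv2.imread(rgb_path, cv2.IMREAD_COLOR)       # 3-channel visible image
    nir = cv2.imread(nir_path, cv2.IMREAD_GRAYSCALE)   # single-band NIR image
    if rgb is None or nir is None:
        raise FileNotFoundError(f"could not read {rgb_path!r} or {nir_path!r}")
    # The two sensors share one optical path, so the pair should already be
    # pixel-registered at the camera's stated output resolution.
    assert rgb.shape[:2] == nir.shape == (1296, 1296), "unexpected image size"
    return rgb, nir
```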

