Abstract

This study presents a real-world application of state-of-the-art unmanned aerial vehicle (UAV) capabilities. Sugar beet is an important industrial crop in many countries, and weeds in sugar beet fields are estimated to dramatically reduce both the quantity and quality of the harvest. Because weeds and sugar beet seedlings are spectrally similar, visual identification of weeds in sugar beet fields is extremely difficult. To address this problem, a lightweight, end-to-end trainable, guided-feature deep learning method, called DeepMultiFuse, was developed to improve weed segmentation performance using multispectral UAV images. The proposed architecture combines five building blocks: guided features, a fusion module, dilated convolutions, a modified inception module, and a gated encoder–decoder network that extracts object-level image representations for different scenes. The network was trained on a generated dataset comprising four multispectral orthomosaic reflectance maps acquired with a RedEdge-M sensor in Rheinbach and three acquired with a Sequoia sensor in Eschikon, for mapping weed segmentation in the field. Experimental results demonstrate that the proposed network, benefiting from the rich features produced by the fusion module, outperforms state-of-the-art networks.
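One of the building blocks named above, the dilated convolution, enlarges a filter's receptive field without adding parameters by inserting gaps between kernel taps. The following is a minimal NumPy sketch of that idea only; the function name, shapes, and values are illustrative and not taken from the paper, whose actual implementation details are not given in the abstract.

```python
import numpy as np

def dilated_conv2d(image, kernel, dilation=2):
    """Valid-mode 2D cross-correlation with a dilated kernel.

    A dilation rate d samples the input with stride d under each kernel
    tap, so a k x k kernel covers an ((k-1)*d + 1)-wide window while
    keeping only k*k weights.
    """
    kh, kw = kernel.shape
    # Effective (dilated) kernel footprint
    eh = (kh - 1) * dilation + 1
    ew = (kw - 1) * dilation + 1
    H, W = image.shape
    out = np.zeros((H - eh + 1, W - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Pick every `dilation`-th pixel inside the effective window
            patch = image[i:i + eh:dilation, j:j + ew:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

# Toy example: a 3x3 kernel with dilation 2 covers a 5x5 window,
# so a 6x6 input yields a 2x2 valid output.
img = np.arange(36, dtype=float).reshape(6, 6)
k = np.ones((3, 3))
y = dilated_conv2d(img, k, dilation=2)
print(y.shape)  # (2, 2)
```

With dilation 1 this reduces to an ordinary valid-mode convolution; increasing the rate trades no extra parameters for a wider spatial context, which is why such layers are popular in segmentation networks.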
