Abstract

In precision farming, weed detection is required for precise weedicide application, and detection of the tobacco crop is necessary for pesticide application on tobacco leaves. Automated, accurate detection of tobacco and weeds from aerial visual cues therefore holds promise. Precise weed detection in crop-field imagery can be treated as a semantic segmentation problem. Many image processing, classical machine learning, and deep learning-based approaches have been devised in the past; among these, deep learning-based techniques promise better accuracy for semantic segmentation, i.e., pixel-level classification. We present a new method that improves the precision of pixel-level inter-class classification of crop and weed pixels. The technique applies semantic segmentation in two stages. In stage I, a binary pixel-level classifier segments background from vegetation. In stage II, a three-class pixel-level classifier distinguishes background, weeds, and tobacco. The output of the first stage is the input of the second stage. To test the designed classifier, a new tobacco crop aerial dataset was captured and manually labeled pixel-wise. The two-stage semantic segmentation architecture shows better pixel-level classification precision for both tobacco and weeds: compared to the traditional one-stage application of semantic segmentation, the intersection over union (IoU) for the tobacco crop improved from 0.67 to 0.85, and the IoU for weeds improved from 0.76 to 0.91. We observe that a shallower, smaller semantic segmentation model suffices for stage I, whereas stage II requires a segmentation network with more neurons to achieve good detection.
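The cascade described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the stage I network emits a binary vegetation mask and the stage II network emits per-pixel labels (0 = background, 1 = weed, 2 = tobacco), and it fuses them by forcing every pixel that stage I marked as background to the background class. The toy arrays and the IoU helper are illustrative assumptions, not values from the paper's dataset.

```python
import numpy as np

def iou(pred, target, cls):
    """Intersection over union for one class, as reported in the abstract."""
    p = pred == cls
    t = target == cls
    union = np.logical_or(p, t).sum()
    return np.logical_and(p, t).sum() / union if union else 1.0

def fuse_two_stage(stage1_mask, stage2_labels):
    """Gate the stage II 3-class prediction with the stage I binary
    vegetation mask: non-vegetation pixels are forced to background (0)."""
    return np.where(stage1_mask == 1, stage2_labels, 0)

# Toy 2x2 example (hypothetical predictions, for illustration only)
stage1 = np.array([[1, 1],
                   [0, 1]])          # 1 = vegetation, 0 = background
stage2 = np.array([[2, 1],
                   [1, 2]])          # 0 = background, 1 = weed, 2 = tobacco
fused = fuse_two_stage(stage1, stage2)
# fused == [[2, 1],
#           [0, 2]]  (pixel (1,0) reset to background by the stage I mask)

ground_truth = np.array([[2, 1],
                         [0, 1]])
print(iou(fused, ground_truth, 2))   # per-class IoU for tobacco
print(iou(fused, ground_truth, 1))   # per-class IoU for weeds
```

The fusion step is the key design choice: stage II never has to reject background pixels on its own, which is one plausible reason a larger network can then concentrate its capacity on the harder weed-versus-tobacco distinction.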
