Abstract

Research is being extensively conducted on using deep learning for crop and weed segmentation based on images captured with a camera. However, segmentation performance varies significantly across crops and weeds, implying that certain classes of crops or weeds are not detected properly. This problem can also stem from the loss calculations used in crop and weed segmentation. In previous studies, the cross-entropy loss (a distribution-based loss) and the Dice loss (which uses spatial information) have been widely used. However, such losses lead to large discrepancies in crop and weed segmentation performance because the correlations between the crop and weed classes are not considered. To solve these problems, this study proposes a multi-task semantic segmentation convolutional neural network for detecting crops and weeds (MTS-CNN) using one-stage training. This approach adds crop, weed, and combined (crop and weed) losses to strengthen the correlations between the crop and weed classes, and the model is designed so that the object (crop and weed) regions are trained intensively. In experiments conducted on three open databases, namely the BoniRob dataset, a crop/weed field image dataset (CWFID), and the rice seedling and weed dataset, the mean intersection over union (mIoU) values of the crop and weed segmentation achieved by MTS-CNN are 0.9164, 0.8372, and 0.8260, respectively. Thus, the results indicate higher accuracy from the proposed approach than from the state-of-the-art methods.
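To illustrate the multi-task loss idea described above, the following is a minimal sketch of how crop-only, weed-only, and combined (crop and weed) segmentation losses, each built from cross-entropy and Dice terms, could be summed for one-stage training. The class coding (0 = background, 1 = crop, 2 = weed), equal weighting of the three terms, and tensor shapes are assumptions for illustration only and are not taken from the MTS-CNN paper.

```python
import torch
import torch.nn.functional as F

def dice_loss(probs, target_onehot, eps=1e-6):
    # probs, target_onehot: (N, C, H, W); soft Dice averaged over classes.
    dims = (0, 2, 3)
    intersection = (probs * target_onehot).sum(dims)
    union = probs.sum(dims) + target_onehot.sum(dims)
    return 1.0 - ((2.0 * intersection + eps) / (union + eps)).mean()

def seg_loss(logits, target, num_classes):
    # Cross-entropy (distribution loss) plus Dice (spatial-overlap loss).
    ce = F.cross_entropy(logits, target)
    probs = F.softmax(logits, dim=1)
    onehot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
    return ce + dice_loss(probs, onehot)

def multi_task_loss(crop_logits, weed_logits, both_logits, label):
    # label: (N, H, W) with 0 = background, 1 = crop, 2 = weed (assumed coding).
    crop_target = (label == 1).long()          # binary crop mask
    weed_target = (label == 2).long()          # binary weed mask
    l_crop = seg_loss(crop_logits, crop_target, 2)
    l_weed = seg_loss(weed_logits, weed_target, 2)
    l_both = seg_loss(both_logits, label, 3)   # joint 3-class (crop and weed) map
    return l_crop + l_weed + l_both            # summed for one-stage training
```

Coupling the two binary terms with the joint three-class term is what lets the gradient reflect the crop-weed correlation rather than treating each class in isolation.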
