Abstract

• Identification and segmentation of weed species is of great importance.
• Semantic segmentation of natural images requires enormous manual annotation effort.
• The proposed architecture increases the performance in distinguishing between weed species.
• The proposed methodology reduces the annotation effort to one third.
• The use of synthetic datasets doubles the segmentation performance.

Weeds compete with productive crops for soil, nutrients and sunlight and are therefore a major contributor to crop yield loss, which is why safer and more effective herbicide products are continually being developed. Digital evaluation tools that automate and homogenize field measurements are of vital importance to accelerate their development. However, the development of these tools requires the generation of semantic segmentation datasets, which is a complex, time-consuming and costly task. In this paper, we present a deep learning segmentation model that is able to distinguish between different plant species at the pixel level. First, we generated three extensive datasets targeting one crop species (Zea mays), three grass species (Setaria verticillata, Digitaria sanguinalis, Echinochloa crus-galli) and three broadleaf species (Abutilon theophrasti, Chenopodium album, Amaranthus retroflexus). The first dataset consists of real field images that were manually annotated. The second dataset is composed of images of plots where only one species is present at a time, and the third dataset was synthetically generated from images of individual plants, mimicking the distribution of real field images. Second, we propose a semantic segmentation architecture that extends PSPNet with an auxiliary classification loss to aid model convergence. Our results show that network performance increases when the real field image dataset is supplemented with the other types of datasets, without increasing the manual annotation effort. More specifically, using the real field dataset alone yields a Dice-Sørensen Coefficient (DSC) score of 25.32. This performance increases when the dataset is combined with the single-species dataset (DSC = 47.97) or the synthetic dataset (DSC = 45.20). As for the proposed model, an ablation study shows that removing the proposed auxiliary classification loss decreases segmentation performance (DSC = 45.96) compared to the full architecture (DSC = 47.97). The proposed method outperforms the current state of the art. In addition, the use of the proposed single-species or synthetic datasets can double the performance of the algorithm compared with using real datasets alone, without additional manual annotation effort.
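The abstract reports results as Dice-Sørensen Coefficient (DSC) scores and describes a PSPNet extended with an auxiliary classification loss. The sketch below illustrates, under assumptions not stated in the abstract (an image-level multi-label species head, an assumed weighting factor `aux_weight`, and standard cross-entropy terms), how such a combined objective and the DSC metric could look in PyTorch. It is a minimal illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def dice_sorensen(pred_mask: torch.Tensor, true_mask: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Dice-Sørensen Coefficient for binary masks: DSC = 2|A ∩ B| / (|A| + |B|)."""
    intersection = (pred_mask * true_mask).sum()
    return (2.0 * intersection + eps) / (pred_mask.sum() + true_mask.sum() + eps)


def combined_loss(seg_logits: torch.Tensor,      # (B, C, H, W) per-pixel class logits
                  seg_target: torch.Tensor,      # (B, H, W)   per-pixel class labels
                  aux_logits: torch.Tensor,      # (B, C)      image-level species logits (assumed head)
                  aux_target: torch.Tensor,      # (B, C)      multi-hot species presence
                  aux_weight: float = 0.4) -> torch.Tensor:
    """Hypothetical combination of a pixel-wise segmentation loss with an
    auxiliary image-level classification loss; `aux_weight` is an assumed
    hyperparameter, not a value taken from the paper."""
    seg_loss = F.cross_entropy(seg_logits, seg_target)
    aux_loss = F.binary_cross_entropy_with_logits(aux_logits, aux_target)
    return seg_loss + aux_weight * aux_loss
```

In this sketch the auxiliary head only shapes training: at inference time the segmentation output is used on its own, and DSC is computed per class against the ground-truth masks.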

