Abstract

Recent advances in computer vision-based plant phenotyping play an important role in determining quantitative plant phenotypes and crop yield. Automatic segmentation of plants and their associated structures is the first and most important step in image-based plant phenotyping. We design and implement a convolutional neural network (CNN)-based modified residual U-Net for semantic segmentation of plants from the background, and we also use the SegNet and U-Net architectures for comparison. In this paper, the residual U-Net, SegNet, and U-Net models are tested on the publicly available leaf segmentation challenge (LSC) dataset and fig dataset. The LSC dataset consists of images of Arabidopsis and tobacco plants grown under controlled conditions, whereas the fig dataset includes top-view images of fig plants captured in open-field conditions. We use eight evaluation metrics to analyze and compare the performance of the residual U-Net, SegNet, and U-Net architectures with existing algorithms in the literature. The residual U-Net, with 15.32 million trainable parameters, outperforms SegNet and other state-of-the-art methods, and achieves performance comparable to U-Net. It achieves a Dice coefficient of 0.9709 on the LSC dataset and 0.9665 on the fig dataset. The segmentation networks used in this paper can also be applied to other plant-related tasks such as plant trait estimation or quantification of plant stress.
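The Dice coefficient cited above is a standard overlap measure for segmentation masks. The following is a minimal, hedged sketch of how it is typically computed on flattened binary masks; it is an illustration of the metric's definition, not the paper's evaluation code:

```python
def dice_coefficient(pred, target):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks given as flat 0/1 sequences."""
    # Count pixels predicted foreground in both masks (the intersection).
    intersection = sum(1 for p, t in zip(pred, target) if p and t)
    total = sum(pred) + sum(target)
    # Convention: two empty masks are treated as a perfect match.
    return 2.0 * intersection / total if total else 1.0
```

For example, a prediction that covers one of two true foreground pixels and adds no false positives yields `dice_coefficient([1, 0, 0, 0], [1, 1, 0, 0]) == 2/3`, while identical masks score 1.0.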
