Abstract

Fast and accurate segmentation of winter wheat ears from canopy images can significantly improve the efficiency of field phenotyping of ears. In this study, a semantic segmentation-based method, EarSegNet, was proposed to perform pixel-wise classification and segment wheat ears from canopy images captured under field conditions. EarSegNet integrates an encoder-decoder structure with dilated convolution, aiming to further improve segmentation accuracy and efficiency for ears of winter wheat. The results showed that EarSegNet achieved accurate segmentation of wheat ears from canopy images captured at the flowering stage (segmentation quality = 0.7743, F1 score = 87.25%, structural similarity index = 0.8773). To validate the proposed method, EarSegNet was compared with widely used segmentation methods, i.e., SegNet, the Two-stage method, and Panicle-SEG. EarSegNet outperformed the compared methods, making it a robust and efficient tool for segmenting ears of winter wheat from canopy images captured at the flowering stage. Generalization tests showed that EarSegNet achieved superior performance to the compared methods, suggesting that it has great potential for field applications. The depth of the encoder, i.e., VGG16, had no significant influence on the overall performance of EarSegNet; however, deepening VGG16 improved its recall. These results indicate that EarSegNet is a promising tool for segmenting ears of winter wheat at the flowering stage.
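
The abstract describes an encoder-decoder architecture with a VGG16 encoder and dilated convolutions, but gives no layer-level details. The following is a minimal sketch, assuming a typical PyTorch formulation; the class name `EncoderDecoderSketch` and the specific dilation rates, channel widths, and upsampling scheme are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of an encoder-decoder segmentation network with dilated
# convolutions (assumed structure; not the paper's exact EarSegNet).
import torch
import torch.nn as nn
from torchvision.models import vgg16


class EncoderDecoderSketch(nn.Module):
    """Hypothetical encoder-decoder for pixel-wise wheat-ear segmentation
    (ear vs. background), loosely following the abstract's description."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Encoder: VGG16 convolutional features (downsamples the input 32x).
        self.encoder = vgg16(weights=None).features
        # Dilated convolutions enlarge the receptive field without
        # further reducing spatial resolution.
        self.dilated = nn.Sequential(
            nn.Conv2d(512, 512, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(512, 512, kernel_size=3, padding=4, dilation=4),
            nn.ReLU(inplace=True),
        )
        # Decoder: restore the input resolution and predict per-pixel scores.
        self.decoder = nn.Sequential(
            nn.Conv2d(512, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
            nn.Conv2d(64, num_classes, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.encoder(x)
        x = self.dilated(x)
        return self.decoder(x)


if __name__ == "__main__":
    model = EncoderDecoderSketch()
    image = torch.randn(1, 3, 224, 224)   # one RGB canopy image
    logits = model(image)                 # shape: (1, 2, 224, 224)
    print(logits.shape)
```

In a sketch like this, deepening the encoder (e.g., swapping VGG16 for a deeper backbone) changes only the `self.encoder` definition, which mirrors the abstract's observation that encoder depth mainly affected recall rather than overall performance.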
