Abstract

Leaf segmentation reveals leaf-level traits such as leaf area, count, stress, and developmental stage. In plant phenotyping, segmenting and counting plant organs such as leaves remains a major challenge due to considerable overlap between leaves and varying environmental conditions, including brightness variation, shadows, and wind-induced blur. The plant's inherent characteristics, such as variation in leaf texture, genotype, size, shape, and density, make leaf segmentation even more complex. To address these challenges, the present work proposes Eff-Unet++, a novel encoder-decoder architecture for leaf segmentation and counting. The architecture uses EfficientNet-B4 as the encoder for accurate feature extraction. Redesigned skip connections and residual blocks in the decoder exploit the encoder output to mitigate information degradation; the redesigned skip connections also reduce computational complexity. A lateral output layer aggregates low-level to high-level features from the decoder, further improving segmentation performance. The proposed method is validated on three datasets: the KOMATSUNA dataset, the Multi-Modality Plant Imagery Dataset (MSU-PID), and the Computer Vision Problems in Plant Phenotyping dataset (CVPPP). The proposed approach outperforms the existing state-of-the-art methods UNet, UNet++, Residual-UNet, InceptionResV2-UNet, and DeepLabV3, achieving best Dice (BestDice) scores of 83.44, 71.17, and 78.27 and Foreground-Background Dice (FgBgDice) scores of 97.48, 91.35, and 96.38 on the KOMATSUNA, MSU-PID, and CVPPP datasets, respectively. For leaf counting, it achieves a difference in count (DiffFG) of 0.11, 0.03, and 0.12 and an absolute difference in count (AbsDiffFG) of 0.21, 0.38, and 1.27 on KOMATSUNA, MSU-PID, and CVPPP, respectively.
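The abstract does not include code, but the following PyTorch sketch illustrates the general kind of architecture it describes: an EfficientNet-B4 backbone (taken here from the timm library) feeding a UNet-style decoder with residual blocks, skip connections, and lateral per-stage prediction heads whose outputs are summed. The class names, decoder channel widths, and the exact aggregation scheme are illustrative assumptions, not the authors' Eff-Unet++ implementation; only the EfficientNet-B4 encoder choice comes from the abstract.

import torch
import torch.nn as nn
import torch.nn.functional as F
import timm


class ResidualBlock(nn.Module):
    """Two 3x3 convs with an identity (or 1x1-projected) shortcut."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.proj = (nn.Conv2d(in_ch, out_ch, 1, bias=False)
                     if in_ch != out_ch else nn.Identity())

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + self.proj(x))


class EffUNetSketch(nn.Module):
    """Hypothetical Eff-Unet++-style model: EfficientNet-B4 encoder,
    residual decoder blocks with skip connections, lateral output heads."""
    def __init__(self, num_classes=2, dec_chs=(256, 128, 64, 32)):
        super().__init__()
        # EfficientNet-B4 backbone returning multi-scale feature maps.
        self.encoder = timm.create_model(
            "efficientnet_b4", features_only=True, pretrained=True)
        enc_chs = self.encoder.feature_info.channels()  # e.g. [24, 32, 56, 160, 448]
        self.blocks = nn.ModuleList()
        in_ch = enc_chs[-1]
        # One decoder stage per remaining encoder scale (skip connections).
        for skip_ch, out_ch in zip(reversed(enc_chs[:-1]), dec_chs):
            self.blocks.append(ResidualBlock(in_ch + skip_ch, out_ch))
            in_ch = out_ch
        # Lateral heads: each decoder stage emits a prediction; the upsampled
        # sum aggregates low-level to high-level features from the decoder.
        self.heads = nn.ModuleList(
            nn.Conv2d(c, num_classes, 1) for c in dec_chs)

    def forward(self, x):
        feats = self.encoder(x)
        y, skips = feats[-1], feats[:-1][::-1]
        logits = 0
        for block, head, skip in zip(self.blocks, self.heads, skips):
            y = F.interpolate(y, size=skip.shape[-2:], mode="bilinear",
                              align_corners=False)
            y = block(torch.cat([y, skip], dim=1))
            logits = logits + F.interpolate(head(y), size=x.shape[-2:],
                                            mode="bilinear",
                                            align_corners=False)
        return logits


# Usage: binary leaf/background segmentation on a 256x256 RGB crop.
# model = EffUNetSketch(num_classes=2)
# out = model(torch.randn(1, 3, 256, 256))  # -> (1, 2, 256, 256)

Summing upsampled lateral predictions is one simple way to realize the "aggregate low-level to high-level features" idea; the paper may instead concatenate or deeply supervise these outputs.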
