Abstract

Direct observation of morphological plant traits is tedious and a bottleneck for high‐throughput phenotyping. Hence, interest in image‐based analysis is increasing, with the requirement for software that can reliably extract plant traits, such as leaf count, preferably across a variety of species and growth conditions. However, current leaf counting methods do not work across species or conditions and therefore may lack broad utility. In this paper, we present Pheno‐Deep Counter, a single deep network that can predict leaf count in two‐dimensional (2D) plant images of different species with a rosette‐shaped appearance. We demonstrate that our architecture can count leaves from multi‐modal 2D images, such as visible light, fluorescence and near‐infrared. Our network design is flexible, allowing for inputs to be added or removed to accommodate new modalities. Furthermore, our architecture can be used as is without requiring dataset‐specific customization of the internal structure of the network, opening its use to new scenarios. Pheno‐Deep Counter is able to produce accurate predictions in many plant species and, once trained, can count leaves in a few seconds. Through our universal and open source approach to deep counting we aim to broaden utilization of machine learning‐based approaches to leaf counting. Our implementation can be downloaded at https://bitbucket.org/tuttoweb/pheno-deep-counter.
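The multi-modal design described above, one input branch per imaging modality whose features are fused before a shared counting head, can be sketched as follows. This is an illustrative toy only, not the published Pheno-Deep Counter architecture: the pooling-based "feature extractors", layer sizes and random weights are placeholders chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def modality_branch(image, n_features=8):
    """Stand-in per-modality feature extractor: global average pooling
    over the spatial dimensions followed by a random linear map."""
    pooled = image.mean(axis=(0, 1))                # (H, W, C) -> (C,)
    w = rng.standard_normal((pooled.size, n_features))
    return np.maximum(pooled @ w, 0.0)              # ReLU-like nonlinearity

def fused_leaf_count(modalities):
    """Late fusion: concatenate per-modality features, then regress a
    single scalar leaf count with a shared head."""
    feats = np.concatenate([modality_branch(img) for img in modalities])
    w_head = rng.standard_normal(feats.size)
    return float(np.maximum(feats @ w_head, 0.0))   # counts are non-negative

# Dummy inputs standing in for co-registered views of the same plant:
# an RGB image (3 channels) and a near-infrared image (1 channel).
rgb = rng.random((64, 64, 3))
nir = rng.random((64, 64, 1))
count = fused_leaf_count([rgb, nir])
```

Adding or removing a modality amounts to adding or dropping an entry in the list passed to `fused_leaf_count`, which mirrors the flexibility claimed for the network's inputs.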

Highlights

  • Image-based plant phenotyping has recently become a valuable tool for quantitative analysis of plant images

  • To showcase the performance of our approach, we employed four different datasets: (i) a special collection from the PRL dataset (Minervini et al, 2016) and the Aberystwyth dataset (Bell and Dee, 2016) that was used in the latest CVPPP 2017 Leaf Counting Challenge (LCC), containing five different sub-datasets; (ii) the multi-modality imagery database for plant phenotyping (Cruz et al, 2016), containing images of A. thaliana Col-0 acquired in three different modalities (RGB, NIR, FMP); (iii) the RGB images in the komatsuna dataset (Uchiyama et al, 2017); (iv) nocturnal Arabidopsis plant images acquired using a NIR camera (Dobrescu et al, 2017b)


Introduction

Image-based plant phenotyping has recently become a valuable tool for quantitative analysis of plant images. Its rapid expansion has highlighted the need for reliable software solutions with the power to analyze data efficiently (Gehan et al, 2017). While the bottleneck was previously thought to be the acquisition of imaging data (i.e. the hardware; Furbank and Tester, 2011), it has recently shifted to a lack of reliable software (and algorithms) (Minervini et al, 2015a), due to the sheer volume of image data that needs to be analyzed to extract quantitative plant traits. For example, IPK (Pape and Klukas, 2015a) uses color images to extract geometrical representations of the isolated plant to find suitable split points to separate each leaf, relying on assumptions about plant shape and

