Abstract
Background
Automated segmentation of large amounts of image data is one of the major bottlenecks in high-throughput plant phenotyping. The dynamic optical appearance of developing plants, inhomogeneous scene illumination, and shadows and reflections in plant and background regions complicate automated segmentation of unimodal plant images. To overcome the problem of ambiguous color information in unimodal data, images of different modalities can be combined into a virtual multispectral cube. However, due to motion artifacts caused by the relocation of plants between photochambers, the alignment of multimodal images is often compromised by blurring artifacts.
Results
Here, we present an approach to automated segmentation of greenhouse plant images based on co-registration of fluorescence (FLU) and visible light (VIS) camera images, followed by separation of plant and marginal background regions using different species- and camera-view-tailored classification models. Our experimental results, including a direct comparison with manually segmented ground-truth data, show that images of different plant types acquired at different developmental stages from different camera views can be automatically segmented with an average accuracy of 93% (SD = 5%) using our two-step registration-classification approach.
Conclusion
Automated segmentation of arbitrary greenhouse images exhibiting highly variable optical plant and background appearance represents a challenging task for data classification techniques that rely on the detection of invariances. To overcome the limitations of unimodal image analysis, a two-step registration-classification approach to the combined analysis of fluorescence and visible light images was developed. Our experimental results show that this algorithmic approach enables accurate segmentation of different FLU/VIS plant images and is suitable for application in a fully automated, high-throughput manner.
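To make the registration step concrete, below is a minimal sketch of how pre-segmented FLU and VIS masks could be co-registered. It uses OpenCV's ECC algorithm with an affine motion model, which is an assumption on our part; the paper does not prescribe this particular registration method, and all parameter values are illustrative.

```python
# A minimal sketch of FLU-to-VIS mask co-registration using OpenCV's ECC
# algorithm. This is not the authors' implementation: the affine motion
# model, iteration counts and filter size are illustrative assumptions.
import cv2
import numpy as np

def register_flu_to_vis(flu_mask: np.ndarray, vis_mask: np.ndarray) -> np.ndarray:
    """Estimate an affine warp aligning a pre-segmented FLU mask to a VIS mask."""
    warp = np.eye(2, 3, dtype=np.float32)  # initial identity transform
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    # ECC maximizes the correlation between the two (float32) mask images.
    _, warp = cv2.findTransformECC(vis_mask.astype(np.float32),
                                   flu_mask.astype(np.float32),
                                   warp, cv2.MOTION_AFFINE, criteria, None, 5)
    h, w = vis_mask.shape
    # Resample the FLU mask into VIS coordinates with the estimated warp.
    return cv2.warpAffine(flu_mask, warp, (w, h),
                          flags=cv2.INTER_NEAREST + cv2.WARP_INVERSE_MAP)
```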
Highlights
Automated segmentation of large amounts of image data is one of the major bottlenecks in high-throughput plant phenotyping
The registration-classification pipeline was applied to segment FLU/visible light (VIS) image pairs of arabidopsis, wheat and maize plants in three steps: (i) pre-segmentation of the FLU and VIS images, (ii) automated co-registration of the pre-segmented FLU/VIS images, and (iii) classification of plant and non-plant structures in VIS image regions
To remove marginal background regions in VIS images, eight distinct color models, trained on all 288 case scenarios covering arabidopsis, wheat and maize plant/background appearance in different camera views and at different developmental stages, were applied (see the classification sketch below)
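As an illustration of step (iii), the sketch below classifies VIS pixels with a per-scenario color model selected by species and camera view. The use of scikit-learn's GaussianNB and the (species, view) keying are illustrative assumptions; the structure of the paper's eight trained color models is not reproduced here.

```python
# A minimal sketch of species-/view-tailored pixel color classification,
# assuming each pre-trained "color model" is a per-pixel classifier over
# RGB values. Model choice (GaussianNB) and registry keys are hypothetical.
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Hypothetical registry: one trained model per species/camera-view scenario.
color_models: dict[tuple[str, str], GaussianNB] = {}

def train_color_model(key, plant_pixels, background_pixels):
    """Fit one color model from labeled RGB pixel samples (N x 3 arrays)."""
    X = np.vstack([plant_pixels, background_pixels])
    y = np.r_[np.ones(len(plant_pixels)), np.zeros(len(background_pixels))]
    color_models[key] = GaussianNB().fit(X, y)

def classify_vis_pixels(key, vis_image):
    """Label every pixel of an H x W x 3 VIS image as plant (1) or background (0)."""
    h, w, _ = vis_image.shape
    labels = color_models[key].predict(vis_image.reshape(-1, 3))
    return labels.reshape(h, w).astype(np.uint8)

# Usage: classify_vis_pixels(("maize", "side_view"), vis_image)
```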
Summary
Automated segmentation of large amounts of image data is one of the major bottlenecks in high-throughput plant phenotyping. The dynamic optical appearance of developing plants, inhomogeneous scene illumination, and shadows and reflections in plant and background regions complicate automated segmentation of unimodal plant images. Plant structures overgrow the optical field, so that the majority of pixels can no longer be considered background. Color-distance methods, though efficient and straightforward to implement, become less reliable in the presence of shadows and illumination changes. In such cases, reference images (i.e., images of the background illumination without any plants) may deviate substantially from the background regions of plant-containing images. Adult plants with large and/or many leaves cast large shadows that alter the original colors and intensity distribution of background regions and low-lying leaves
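For reference, here is a minimal sketch of the classical reference-image color-distance test described above; the Euclidean RGB metric and the fixed threshold are illustrative assumptions. It makes the failure mode concrete: any shadow or illumination drift that shifts background colors away from the plant-free reference erodes the method's reliability.

```python
# A minimal sketch of reference-image color-distance segmentation: a pixel
# counts as background if its color stays close to the plant-free reference
# shot. Metric (Euclidean RGB) and threshold value are illustrative.
import numpy as np

def color_distance_segmentation(image, reference, threshold=30.0):
    """Return a binary plant mask: 1 where a pixel deviates from the reference."""
    # Per-pixel Euclidean distance in RGB space between scene and reference.
    dist = np.linalg.norm(image.astype(np.float32) - reference.astype(np.float32),
                          axis=-1)
    # Failure mode from the summary: a cast shadow darkens background pixels,
    # inflating `dist` past the threshold and mislabeling background as plant.
    return (dist > threshold).astype(np.uint8)
```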