Abstract
With the introduction of multi-camera systems in modern plant phenotyping, new opportunities for combined multimodal image analysis emerge. Visible light (VIS), fluorescence (FLU) and near-infrared images enable scientists to study different plant traits based on optical appearance, biochemical composition and nutrition status. A straightforward analysis of high-throughput image data is hampered by a number of natural and technical factors, including large variability of plant appearance, inhomogeneous illumination, and shadows and reflections in the background regions. Consequently, automated segmentation of plant images represents a major challenge and often requires extensive human-machine interaction. Combined analysis of different image modalities may enable automation of plant segmentation in "difficult" image modalities, such as VIS images, by utilising the segmentation results from image modalities that exhibit higher contrast between plant and background, e.g. FLU images. For efficient segmentation and detection of diverse plant structures (e.g. leaf tips, flowers), image registration techniques based on feature point (FP) matching are of particular interest. However, finding reliable feature points and point pairs for differently structured plant species in multimodal images can be challenging. To address this task in a general manner, different feature point detectors should be considered. Here, a comparison of seven different feature point detectors for automated registration of VIS and FLU plant images is performed. Our experimental results show that straightforward image registration using FP detectors is prone to errors due to the large structural differences between the FLU and VIS modalities. We show that structural image enhancement, such as background filtering and edge image transformation, significantly improves the performance of FP algorithms. To overcome the limitations of single FP detectors, a combination of different FP methods is suggested.
We demonstrate the application of our enhanced FP approach for automated registration of a large number of FLU/VIS images of developing plant species acquired in high-throughput phenotyping experiments.
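The structural image enhancement described above (background filtering followed by an edge image transformation) can be sketched in a few lines of NumPy. This is an illustrative sketch only: the threshold value, the gradient-magnitude edge operator and the function name are assumptions for demonstration, not the paper's actual MATLAB implementation.

```python
import numpy as np


def enhance_for_matching(vis_gray, flu_gray, flu_thresh=0.1):
    """Prepare a FLU/VIS image pair for feature point matching.

    vis_gray, flu_gray: 2D float arrays in [0, 1], assumed to be of the
    same size. FLU images exhibit strong plant/background contrast, so a
    simple intensity threshold (illustrative value, not from the paper)
    yields a plant mask that suppresses background shadows and
    reflections. Both modalities are then reduced to edge images so that
    FP detectors see structurally similar content.
    """
    mask = flu_gray > flu_thresh

    def edge_magnitude(img):
        # Gradient-magnitude edges as a simple stand-in for an edge
        # image transformation.
        gy, gx = np.gradient(img)
        return np.hypot(gx, gy)

    vis_edges = edge_magnitude(vis_gray) * mask
    flu_edges = edge_magnitude(flu_gray) * mask
    return vis_edges, flu_edges, mask
```

A feature point detector would then be run on `vis_edges` and `flu_edges` instead of the raw intensity images, so that matches are driven by shared plant contours rather than modality-specific appearance.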
Highlights
With the rise of high-throughput multi-camera systems during the past decades, modern phenotyping facilities provide biologists with an ever-growing amount of multimodal image data
The number of detected feature points, putatively matched and accepted FP pairs, as well as the success rates and overlap ratios of single FP detectors, were assessed. These tests showed that all seven FP algorithms exhibit unsatisfactory performance with the MATLAB default set of parameters
Our systematic tests with other FP detectors indicated that no single detector is capable of identifying a sufficient number of corresponding FP pairs for original FLU/visible light spectrum (VIS) images using the MATLAB default parameters
Summary
With the rise of high-throughput multi-camera systems during the past decades, modern phenotyping facilities provide biologists with an ever-growing amount of multimodal image data. Additional spatial information is required for reliable segmentation of plant structures. For this purpose, a combination of different image modalities can be used. In order to perform a combined analysis of FLU and VIS images taken by cameras of different spatial resolution from different positions, the images have to be geometrically aligned. Manual registration of one test FLU/VIS image pair was suggested for derivation of a relative geometric transformation, which is then applied to all subsequent images of the same experiment [1]. Due to a number of factors, such as daytime variation of room temperature, different plant sizes, and varying distances between camera and plants at different rotation angles, the geometric transformations required for FLU/VIS image registration change over time. Ideally, every FLU/VIS image pair has to be aligned anew
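The geometric alignment described above can be illustrated as a least-squares fit of a 2D affine transformation to accepted FP pairs from one FLU/VIS image pair; the recovered matrix can then be applied to coordinates in subsequent images. This is a minimal sketch under the assumption of an affine transformation model; the function names are hypothetical and the actual pipeline may use a different model or solver.

```python
import numpy as np


def estimate_affine(src, dst):
    """Least-squares 2D affine transform mapping src points to dst points.

    src, dst: (N, 2) arrays of matched feature point coordinates, e.g.
    accepted FP pairs from a FLU image and its VIS counterpart (N >= 3).
    Returns a 2x3 matrix A such that dst ~ src @ A[:, :2].T + A[:, 2].
    """
    n = src.shape[0]
    # Design matrix [x, y, 1]; solve for the 6 affine parameters.
    X = np.hstack([src, np.ones((n, 1))])
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)  # shape (3, 2)
    return params.T  # shape (2, 3)


def apply_affine(A, pts):
    """Apply a 2x3 affine matrix to (N, 2) point coordinates."""
    return pts @ A[:, :2].T + A[:, 2]
```

Because the transformation drifts over time (temperature, plant size, rotation angle), re-estimating it per image pair, as the summary suggests, amounts to repeating this fit with each pair's own accepted FP matches rather than reusing a single fixed matrix.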