Background
The early and specific detection of abiotic and biotic stresses, and particularly of their combinations, is a major challenge for maintaining and increasing plant productivity in sustainable agriculture under changing environmental conditions. Optical imaging techniques enable cost-efficient and non-destructive quantification of plant stress states. Monomodal detection of a given stressor is usually based on non-specific or indirect features and is therefore commonly limited in its cross-specificity towards other stressors. Fusing multi-domain sensor systems can provide more discriminative features for machine learning models and synergistic information that increases cross-specificity in plant disease detection when image data are fused at the pixel level.

Results
In this study, we demonstrate successful multi-modal image registration of RGB, hyperspectral (HSI) and chlorophyll fluorescence (ChlF) kinetics data at the pixel level for high-throughput phenotyping of A. thaliana grown in multi-well plates and of an assay with detached leaf discs of Rosa × hybrida inoculated with the black spot disease-inducing fungus Diplocarpon rosae. Here, we showcase the effects of (i) reference image selection, (ii) different registration methods and (iii) frame selection on the performance of image registration via affine transform. In addition, we developed a combined approach that selects among registration methods via NCC for each file, resulting in a robust and accurate registration at the cost of additional computation time. Since the image data encompass multiple objects, the initial coarse image registration using a global transformation matrix exhibited heterogeneity across different image regions. By applying an additional fine registration to the object-separated image data, we achieved a high overlap ratio. Specifically, for the A. thaliana test set, the overlap ratios (ORConvex) were 98.0 ± 2.3% for RGB-to-ChlF and 96.6 ± 4.2% for HSI-to-ChlF. For the Rosa × hybrida test set, the values were 98.9 ± 0.5% for RGB-to-ChlF and 98.3 ± 1.3% for HSI-to-ChlF.

Conclusion
The presented multi-modal imaging pipeline enables high-throughput, high-dimensional phenotyping of different plant species with respect to various biotic or abiotic stressors. This paves the way for in-depth studies investigating the correlative relationships of the multi-domain data or the performance enhancement of machine learning models via multi-modal image fusion.
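The Results describe choosing, per file, whichever registration method yields the best normalized cross-correlation (NCC) against the reference modality after an affine warp. The following is a minimal sketch of that idea, assuming OpenCV and NumPy; the two candidate estimators (intensity-based ECC and feature-based ORB + RANSAC), the NCC formulation and all function names are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np


def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation of two equally sized grayscale images."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))


def ecc_affine(ref: np.ndarray, mov: np.ndarray) -> np.ndarray:
    """Intensity-based affine estimation (ECC). Expects single-channel uint8/float32 images."""
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    _, warp = cv2.findTransformECC(ref, mov, warp, cv2.MOTION_AFFINE, criteria, None, 5)
    return warp


def orb_affine(ref: np.ndarray, mov: np.ndarray) -> np.ndarray:
    """Feature-based affine estimation (ORB keypoints + RANSAC)."""
    orb = cv2.ORB_create(2000)
    kp_r, des_r = orb.detectAndCompute(ref, None)
    kp_m, des_m = orb.detectAndCompute(mov, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_m, des_r)
    src = np.float32([kp_m[m.queryIdx].pt for m in matches])
    dst = np.float32([kp_r[m.trainIdx].pt for m in matches])
    warp, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
    if warp is None:
        raise cv2.error("affine estimation failed")
    return warp.astype(np.float32)


def register_best(ref: np.ndarray, mov: np.ndarray) -> np.ndarray:
    """Run both candidate methods and keep the warped image with the higher NCC."""
    h, w = ref.shape
    best_img, best_score = mov, -np.inf
    for estimate in (ecc_affine, orb_affine):
        try:
            warp = estimate(ref, mov)
            warped = cv2.warpAffine(mov, warp, (w, h))
            score = ncc(ref, warped)
        except cv2.error:
            continue  # one method may fail on low-texture frames; fall back to the other
        if score > best_score:
            best_img, best_score = warped, score
    return best_img
```

The same selection could be applied twice, once for the global coarse registration and again per segmented object for the fine refinement step mentioned in the Results.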
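The abstract reports registration quality as an overlap ratio of convex hulls (ORConvex) but does not spell out the formula. The sketch below assumes an intersection-over-union of the convex hulls of two binary object masks, computed with shapely; this is an illustrative assumption rather than the authors' exact definition.

```python
import numpy as np
from shapely.geometry import MultiPoint


def or_convex(mask_ref: np.ndarray, mask_reg: np.ndarray) -> float:
    """Overlap ratio of the convex hulls of two binary object masks.

    Assumed here as intersection-over-union of the hulls; the paper's
    exact ORConvex definition may differ.
    """
    # Foreground pixel coordinates -> convex hull polygons of each mask
    hull_ref = MultiPoint([tuple(p) for p in np.argwhere(mask_ref)]).convex_hull
    hull_reg = MultiPoint([tuple(p) for p in np.argwhere(mask_reg)]).convex_hull
    inter = hull_ref.intersection(hull_reg).area
    union = hull_ref.union(hull_reg).area
    return inter / union if union > 0 else 0.0
```

In a per-object evaluation, such a ratio would be computed for each segmented plant or leaf disc after fine registration and then averaged over the test set.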