Abstract

Grain count is an important trait in sorghum because it is highly correlated with potential yield. By accurately phenotyping the number of grains per panicle, farmers and agronomists can better monitor crop development. Additionally, mapping the spatial variability of grain count can help identify areas of the field with higher or lower yield potential, allowing for targeted management strategies. This study introduces a method for predicting grain count for sorghum panicles using a deep learning-based regression framework that operates on point clouds and Red Green Blue (RGB) images. The framework integrates global features derived from a point cloud model of the panicle with grain counts detected in a sequence of RGB images. The models were evaluated on a paired dataset of point cloud models and RGB images collected for 147 sorghum panicles, covering a variety of panicle structures and grain counts. The point cloud models were constructed via a proximal structure-from-motion photogrammetry workflow. The model uses PointNet as the backbone network for processing the point clouds and YOLOv5 for detecting grains in the RGB images. Following the grain detection step, a scaled dot-product attention module processes the grain counts obtained from the RGB image sequence. Finally, the global features of the point cloud model and the attended grain counts are combined to predict the total grain count for the panicle. The models were also evaluated on downsampled low-resolution point clouds to assess their potential for future adaptation to panicle point clouds acquired in the field. The models predicted grain counts with a mean absolute percent error of 6.5% on the high-resolution point cloud dataset and 6.8% on the low-resolution dataset.
The results serve as a proof of concept demonstrating the viability of a multimodal approach based on point clouds and RGB images for estimating grain count per panicle. Further enhancements, such as adding a module to register the point clouds with the RGB images and evaluating additional point cloud backbone networks, could strengthen the method.
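The fusion step described in the abstract, attending over per-image grain counts with a query derived from the point cloud's global feature and concatenating the result, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the embedding dimension, random weight matrices, and the helper names `scaled_dot_product_attention` and `fuse_features` are hypothetical stand-ins for learned layers in the actual network.

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Batched softmax(q k^T / sqrt(d)) v."""
    d = q.shape[-1]
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)   # (B, Q, N)
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)               # attention weights sum to 1
    return w @ v                                     # (B, Q, d)

def fuse_features(global_feat, counts, rng, embed_dim=16):
    """Hypothetical fusion sketch: embed each per-image grain count as a
    token, attend over the tokens with a query projected from the point
    cloud global feature, then concatenate the pooled token with the
    global feature (the input to the regression head in the abstract).
    Random matrices stand in for learned projections."""
    W_tok = rng.standard_normal((1, embed_dim))
    W_q = rng.standard_normal((global_feat.shape[-1], embed_dim))
    tokens = counts[..., None] @ W_tok               # (B, N_images, E)
    q = (global_feat @ W_q)[:, None, :]              # (B, 1, E)
    pooled = scaled_dot_product_attention(q, tokens, tokens)[:, 0, :]
    return np.concatenate([global_feat, pooled], axis=-1)
```

In a trained model the concatenated vector would feed a small regression head that outputs the predicted total grain count per panicle.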
