An automatic system for cantaloupe flower pollination in greenhouses is proposed. The system comprises a mobile platform, a robotic manipulator, and a camera that approaches the flowers to detect and recognise their external features. The main task of the vision system is to determine the position and orientation of each flower in Cartesian coordinates, allowing the manipulator to reach the corresponding pose and perform pollination. A comprehensive method is proposed that accurately determines the position and orientation of cantaloupe flowers in real environments, thereby ensuring the accuracy of the pollination process. The vision system captures images, detects the flowers, and recognises their state from external features such as size, colour, and shape, enabling appropriate nozzle access during pollination. The proposed approach begins with a segmentation method designed to precisely locate and segment the target cantaloupe flowers. A mathematical model then determines the key points needed to establish the growth orientation of each flower. Finally, an inverse-projection method converts the flower position from the two-dimensional (2D) image into three-dimensional (3D) space, providing the target position for the pollination robot. Experiments conducted in a laboratory demonstrate the efficacy of the cantaloupe flower segmentation method, yielding precision, recall, and F1 scores of 87.91%, 90.76%, and 89.31%, respectively. Furthermore, the accuracy of the growth-orientation prediction method reaches approximately 86.7%. Notably, positional errors in 3D space predominantly fall within the allowable range, resulting in a successful pollination rate of up to 83.1%.
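The 2D-to-3D inverse projection mentioned above can be sketched with standard pinhole-camera back-projection. This is a minimal illustration only: the abstract does not specify the camera model or parameters, so the intrinsics and the pixel/depth values below are assumed for demonstration, not taken from the paper.

```python
import numpy as np

def back_project(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with known depth Z into 3D camera
    coordinates using the pinhole model:
        X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth.
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Illustrative intrinsics (assumed, not from the paper)
fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0

# Hypothetical flower centre detected at pixel (400, 300), depth 0.5 m
point_3d = back_project(400, 300, 0.5, fx, fy, cx, cy)
```

In a full pipeline of this kind, `point_3d` (expressed in the camera frame) would still need to be transformed into the manipulator's base frame via the hand-eye calibration before the robot can reach the flower.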