Apples are among the most popular fruits worldwide owing to their health and nutritional benefits. Artificial intelligence in agriculture has advanced, but the vision component, which improves machine efficiency, speed, and throughput, still needs improvement. Managing apple development from planting to harvest affects productivity, quality, and economics. First, by establishing a vision-system platform with a range of camera types that conforms to orchard standard specifications for data gathering, this work provides two new apple datasets: Orchard Fuji Growth Stages (OFGS) and Orchard Apple Varieties (OAV), together with preliminary benchmark assessments. Second, this research proposes the orchard apple vision transformer method (POA-VT), incorporating novel regularization techniques (CRT) that boost efficiency and optimize the loss functions; the highest accuracy scores are 91.56% for OFGS and 94.20% for OAV. Third, an ablation study demonstrates the importance of CRT to the proposed method. Fourth, CRT outperforms the baselines when compared with standard regularization functions. Finally, time-series analyses predict the ‘Fuji’ growth stage, with training and validation RMSE of 19.29 and 19.26, respectively. The proposed method offers high efficiency across multiple tasks, improves the automation of apple systems, and is highly flexible in handling various tasks related to apple fruits. Furthermore, it can integrate with real-time systems such as UAVs and sorting systems. This research benefits the development of robotic vision for apples, development policies, time-sensitive harvesting schedules, and decision-making.