Early detection of green fruits from colour images is important for precise growth-status monitoring and production estimation, enabling effective improvement of fruit quality and optimisation of orchard management. Owing to the similar colours of the fruit skin and background, variable illumination and occlusions, algorithmic recognition of green apple targets in natural scenes is difficult. To address this problem, a GrabCut model based on the visual attention mechanism was designed for preliminary extraction of the salient fruit regions, with strong robustness to noise. For overlapping fruit targets, the Ncut (normalised cut) algorithm was applied to accurately segment the extracted fruits. After segmentation, a three-point circle fitting method was developed to reconstruct each segmented apple target. A total of 200 green apple images were tested in this study. The improved GrabCut algorithm was compared against the standard GrabCut, GBVS (graph-based visual saliency), AIM (attention based on information maximisation), SDSR (saliency detection by self-resemblance), MR (manifold ranking), mean-shift and K-means clustering algorithms. The F1 score of the improved GrabCut model for fruit region extraction was 94.08%, which was 1.92%, 40.62%, 46.98%, 33.58%, 13.89%, 12.89% and 33.74% higher than the scores of these seven algorithms, respectively. Notably, the improved GrabCut model recognised all 200 apple images (100.00%). Moreover, an F1 score of 94.12% was achieved by the proposed segmentation and recognition method, with a detection error of 7.37%. Experimental results showed that the proposed method can accurately recognise green apples under natural light conditions.
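The three-point circle fitting mentioned above can be illustrated with a minimal geometric sketch. The abstract does not give the authors' implementation details, so this is only an assumed standard formulation: the circle centre is recovered from the perpendicular-bisector equations of three non-collinear contour points, and the radius is the distance from that centre to any of the points. The function name and inputs are hypothetical.

```python
import math

def circle_from_three_points(p1, p2, p3):
    """Fit a circle through three non-collinear 2D points.

    Solves the perpendicular-bisector equations for the centre (ux, uy),
    then takes the radius as the distance from the centre to one point.
    This is a generic formulation, not necessarily the paper's exact one.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Twice the signed area of the triangle; zero means collinear points.
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if abs(d) < 1e-12:
        raise ValueError("points are collinear; no unique circle exists")
    s1, s2, s3 = x1**2 + y1**2, x2**2 + y2**2, x3**2 + y3**2
    ux = (s1 * (y2 - y3) + s2 * (y3 - y1) + s3 * (y1 - y2)) / d
    uy = (s1 * (x3 - x2) + s2 * (x1 - x3) + s3 * (x2 - x1)) / d
    r = math.hypot(ux - x1, uy - y1)
    return (ux, uy), r

# Example: three points on the unit circle recover centre (0, 0), radius 1.
centre, radius = circle_from_three_points((0, 1), (1, 0), (0, -1))
```

In practice the three points would be sampled from the visible arc of a partially occluded apple contour, so the fitted circle approximates the full fruit boundary even when part of it is hidden.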