Abstract

A procedure for identifying apples in night-time orchard images was developed and tested on two datasets totalling over 550 images of Golden Delicious trees, captured in two different years with different cameras and lighting systems. The analysis started by detecting specular reflection highlights and extracting sub-images (101 by 101 pixels) centred at those local maxima. Each sub-image was reduced to a 676-element Upright Speeded Up Robust Features (U-SURF) vector. Close to 20,000 sub-images from one dataset were manually labelled as “apple” or “not apple”; the latter group included parts of leaves, branches and other objects that exhibited strong specular reflection. Seventy-two classifiers were trained, with the number of “apple” and “not apple” training samples ranging from 500 to 2000 and from 5000 to 10,000, respectively, and with vocabulary sizes ranging from 500 to 10,000. Misclassifications occurred mostly in dark and low-contrast regions, which led to the development of alternative models based on the posterior probability that the classification result was correct, taking into account the sub-image entropy or intensity. Yield models were calibrated for each dataset using 20 randomly selected trees. For both datasets the overall yield estimate was within 10% of the actual yield, and the standard deviation was around 30% of the average tree yield. These results are similar to those reported in previous studies; however, whereas previous studies used procedures calibrated and tested with images from the same dataset, in the present study the classifier trained with images from one dataset was successfully applied to the second dataset.
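
As a rough illustration of the highlight-detection and feature-extraction steps summarised above, the following Python sketch uses OpenCV to locate bright local maxima, cut 101 by 101 pixel sub-images around them, and compute U-SURF descriptors. It is a minimal sketch under stated assumptions, not the authors' code: the function names, the 15 by 15 local-maximum window, the intensity threshold and the keypoint grid are illustrative choices, and the abstract does not specify how the 676-element vector is assembled. SURF is provided by the opencv-contrib package and may require a build with non-free modules enabled.

    # Hypothetical sketch of the highlight-detection and U-SURF description steps.
    # All names, thresholds and grid spacings below are assumptions for illustration.
    import cv2
    import numpy as np

    def extract_candidate_subimages(gray, patch=101, min_intensity=200):
        """Find bright local maxima (candidate specular highlights) in a
        greyscale image and return patch x patch sub-images centred on them."""
        half = patch // 2
        # A pixel equal to its greyscale dilation is a local maximum in that window.
        dilated = cv2.dilate(gray, np.ones((15, 15), np.uint8))
        peaks = (gray == dilated) & (gray >= min_intensity)
        ys, xs = np.nonzero(peaks)
        subimages = []
        for y, x in zip(ys, xs):
            # Keep only maxima far enough from the border to fit a full patch.
            if half <= y < gray.shape[0] - half and half <= x < gray.shape[1] - half:
                subimages.append(gray[y - half:y + half + 1, x - half:x + half + 1])
        return subimages

    def usurf_descriptor(subimage):
        """Compute Upright SURF descriptors on a coarse keypoint grid and
        concatenate them into a single feature vector (the grid and keypoint
        size are illustrative; the paper's 676-element layout is not given
        in the abstract)."""
        surf = cv2.xfeatures2d.SURF_create(upright=True)  # requires opencv-contrib
        keypoints = [cv2.KeyPoint(float(x), float(y), 20.0)
                     for y in range(10, subimage.shape[0], 20)
                     for x in range(10, subimage.shape[1], 20)]
        _, desc = surf.compute(subimage, keypoints)
        return desc.ravel() if desc is not None else None

    # Example usage (hypothetical image path):
    # gray = cv2.imread("orchard_night.png", cv2.IMREAD_GRAYSCALE)
    # patches = extract_candidate_subimages(gray)
    # features = [usurf_descriptor(p) for p in patches]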
