Abstract

Machine vision based on color, multispectral, and hyperspectral cameras can be used in potato quality grading to predict length, width, and mass, as well as to detect interior and exterior defects of a sample. However, the images obtained by these cameras are limited to two-dimensional shape information, such as width, length, and boundary. Other vital elements of appearance data related to potato mass and quality, including thickness, volume, and surface gradient changes, are difficult to detect because of slight surface color differences and device limitations. In this study, we recorded the depth images of 110 potatoes using a depth camera, including both samples with uniform shapes and samples with deformations (e.g., bumps and divots). A novel method was developed for estimating potato mass and shape information, and three-dimensional models were built using a new image processing algorithm for depth images. Other features, including length, width, thickness, and volume, were also calculated as factors related to mass prediction. Experimental results indicate that the proposed models accurately predict potato length, width, and thickness; the mean absolute errors for these predictions were 2.3 mm, 2.1 mm, and 2.4 mm, respectively, while the mean percentage errors were 2.5%, 3.5%, and 4.4%. Mass prediction based on a 3D volume model proved more accurate for both normal and deformed potato samples than models based on area calculation. Using the volume density model, 93% of samples were graded into the correct size group, compared with only 73% using the area density model. Depth image processing is therefore a promising method for non-destructive post-harvest grading, especially for products where size, shape, and surface condition are important factors.
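
The abstract does not include code; the sketch below is only a minimal illustration of the volume-density idea it describes: integrating per-pixel heights from a top-down depth map to estimate volume, then multiplying by an empirically fitted density to predict mass. All function names and numeric values (pixel scale, tray depth, the density constant, and the synthetic half-ellipsoid "potato") are hypothetical and not taken from the paper.

```python
import numpy as np

def estimate_volume_mm3(depth_mm, background_mm, pixel_area_mm2):
    """Integrate per-pixel height above the support plane to estimate volume (mm^3)."""
    # Pixels at or below the support plane contribute zero volume.
    height_mm = np.clip(background_mm - depth_mm, 0.0, None)
    return float(height_mm.sum() * pixel_area_mm2)

def predict_mass_g(volume_mm3, density_g_per_mm3):
    """Mass prediction as volume times an empirically fitted volume density."""
    return volume_mm3 * density_g_per_mm3

if __name__ == "__main__":
    # Synthetic top-down depth map: camera 500 mm above a flat tray,
    # with a half-ellipsoid "potato" (semi-axes 45 x 30 mm, 40 mm tall).
    h, w, px = 240, 320, 0.5            # image size in pixels; 0.5 mm per pixel side
    background = 500.0                  # tray depth in mm
    yy, xx = np.mgrid[0:h, 0:w]
    cx, cy, a, b, c = w / 2, h / 2, 45.0, 30.0, 40.0
    r2 = ((xx - cx) * px / a) ** 2 + ((yy - cy) * px / b) ** 2
    height = np.where(r2 < 1.0, c * np.sqrt(np.clip(1.0 - r2, 0.0, 1.0)), 0.0)
    depth = background - height         # closer surface -> smaller depth value

    vol = estimate_volume_mm3(depth, background, px * px)
    mass = predict_mass_g(vol, density_g_per_mm3=1.08e-3)  # ~1.08 g/cm^3, illustrative only
    print(f"volume ~ {vol / 1000:.1f} cm^3, predicted mass ~ {mass:.1f} g")
```

For the synthetic half-ellipsoid, the numerical integral should come close to the analytic volume (2/3)*pi*a*b*c, which is one way to sanity-check a pipeline of this kind before applying it to real depth captures.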
