Abstract

Lentil and field pea are each commonly marketed as split and dehulled products. In plant-breeding programmes, split yield is a target trait for genetic improvement. However, the standard laboratory method for assessing split yield requires milled grain to be sorted manually into split and dehulled fractions, a time-consuming process that limits the number of germplasm lines that can be evaluated. A machine vision approach based on artificial neural networks was proposed to classify split and dehulled fractions from multispectral images of grains. Three neural networks were trained on different inputs derived from the images: (1) a convolutional network trained on the full images, (2) a convolutional network trained on distributions of image features, and (3) a fully connected network trained on the mean and standard deviation of each image feature. Accuracy and training time were compared to determine the trade-off between training networks on smaller inputs for computational efficiency and on full images for accuracy. The networks with reduced input dimensionality completed training and prediction in half the time of the image-based network. The convolutional network based on feature distributions achieved a validation accuracy of 88.1%, on average 1.6% greater than the image-based convolutional network and 4.6% greater than the fully connected network based on simple (mean and standard deviation) features. Feature distributions extracted from the multispectral images captured the diversity of image data required to differentiate the milling categories, yielding gains in computational efficiency over the image-based network without loss of generality.
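To make the three input types concrete, the following is a minimal PyTorch sketch of networks of the kinds compared above. It is not the authors' implementation: the number of spectral bands, the number of extracted image features, the histogram bin count, the image size, and all layer widths are illustrative assumptions.

```python
# Hedged sketch (not the published implementation) of the three network
# types compared in the abstract. All shapes below are assumptions: each
# grain image is taken as 7 spectral bands at 64x64 pixels, each image
# feature is summarised as a 32-bin histogram, and the simple-feature
# input is a mean and standard deviation per feature.
import torch
import torch.nn as nn

NUM_BANDS = 7        # assumed number of multispectral bands
NUM_FEATURES = 10    # assumed number of extracted image features
NUM_BINS = 32        # assumed histogram bins per feature distribution
NUM_CLASSES = 2      # split vs. dehulled fractions

# (1) Convolutional network on the full multispectral image.
image_net = nn.Sequential(
    nn.Conv2d(NUM_BANDS, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
    nn.Linear(64, NUM_CLASSES),
)

# (2) Convolutional network on per-feature distributions: 1-D convolutions
# over histogram bins, with one input channel per image feature.
dist_net = nn.Sequential(
    nn.Conv1d(NUM_FEATURES, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Flatten(),
    nn.Linear(16 * (NUM_BINS // 2), 64), nn.ReLU(),
    nn.Linear(64, NUM_CLASSES),
)

# (3) Fully connected network on mean and standard deviation per feature.
simple_net = nn.Sequential(
    nn.Linear(2 * NUM_FEATURES, 64), nn.ReLU(),
    nn.Linear(64, NUM_CLASSES),
)

# Smoke test with dummy batches of 4 samples each; every network
# outputs one logit per milling category.
print(image_net(torch.randn(4, NUM_BANDS, 64, 64)).shape)        # (4, 2)
print(dist_net(torch.randn(4, NUM_FEATURES, NUM_BINS)).shape)    # (4, 2)
print(simple_net(torch.randn(4, 2 * NUM_FEATURES)).shape)        # (4, 2)
```

Note how the distribution-based network consumes a few hundred values per grain rather than a full multiband image, which is consistent with the halved training and prediction time reported above while retaining more information than the mean and standard deviation alone.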
