Abstract

The development of machine vision-based technologies to replace human labor for rapid and accurate detection of agricultural product quality has received extensive attention. In this study, we describe a low-rank representation-based jointly multi-modal bag-of-features (JMBoF) classification framework for inspecting the appearance quality of postharvest dry soybean seeds. Two categories of features, speeded-up robust features (SURF) and the spatial layout of L*a*b* color features, are extracted to characterize the dry soybean seed kernel. The bag-of-features model is used to generate a visual dictionary descriptor from each of these two feature types. To represent the image characteristics precisely, we introduce the low-rank representation (LRR) method to eliminate redundant information from the long joint descriptor formed by concatenating the two modal dictionary descriptors. A multiclass support vector machine algorithm is then used to classify the LRR encoding of the jointly multi-modal bag of features. We validate our JMBoF classification algorithm on a soybean seed image dataset. The proposed method significantly outperforms state-of-the-art single-modal bag-of-features methods in the literature and could serve as a significant and valuable technology in the postharvest dry soybean seed classification procedure.
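
As a rough illustration of this pipeline, the sketch below builds one bag-of-features histogram per modality from pre-extracted local descriptors, concatenates them into a joint descriptor, and trains a multiclass SVM. The function names (`build_codebook`, `bof_histogram`, `joint_descriptor`) and the scikit-learn/KMeans choices are illustrative assumptions rather than the authors' implementation; the LRR encoding applied to the joint descriptors is sketched separately after the Introduction.

```python
# Minimal sketch of a joint multi-modal bag-of-features pipeline (assumed names).
# Local descriptors (e.g. SURF keypoint descriptors and L*a*b* grid statistics)
# are assumed to be pre-extracted as one array per image.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def build_codebook(descriptor_sets, k=200, seed=0):
    """Cluster all local descriptors of one modality into a k-word visual dictionary."""
    stacked = np.vstack(descriptor_sets)
    return KMeans(n_clusters=k, random_state=seed, n_init=10).fit(stacked)

def bof_histogram(descriptors, codebook):
    """Encode one image as a normalized histogram of visual-word assignments."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def joint_descriptor(surf_desc, lab_desc, surf_codebook, lab_codebook):
    """Concatenate the two modality histograms into one joint BoF vector."""
    return np.concatenate([bof_histogram(surf_desc, surf_codebook),
                           bof_histogram(lab_desc, lab_codebook)])

# Hypothetical usage (surf_train / lab_train: lists of per-image descriptor arrays):
# surf_cb = build_codebook(surf_train); lab_cb = build_codebook(lab_train)
# X = np.array([joint_descriptor(s, c, surf_cb, lab_cb)
#               for s, c in zip(surf_train, lab_train)])
# clf = SVC(kernel="rbf", decision_function_shape="ovr").fit(X, labels)
```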

Highlights

  • The development of machine vision-based technologies to replace human labor for rapid and accurate detection of agricultural product quality has received extensive attention

  • Soybean seed quality can be measured in several ways, including Raman spectroscopy[1,2], near-infrared spectroscopy[3,4], terahertz spectroscopy[5,6], high-performance liquid chromatography-mass spectrometry[7,8], capillary electrophoresis-mass spectrometry[9,10], scanning electron microscopy[11,12] and nuclear magnetic resonance[13,14] techniques

  • Xiao et al. (2018) introduced a support vector machine (SVM) classifier for classifying four kinds of important southern vegetable pests based on a scale-invariant feature transform (SIFT) BoF visual vocabulary[20]


Introduction

The development of machine vision-based technologies to replace human labor for rapid and accurate detection of agricultural product quality has received extensive attention. Liu et al. extracted L*a*b* color features, three texture features (energy, entropy and contrast) and eight shape features (perimeter, area, circularity, elongation, compactness, eccentricity, elliptic axle ratio and equivalent diameter) as the input of a BP artificial neural network and set up a three-layer classifier for sorting six categories of soybean kernels: mildewed, insect-damaged, broken, skin-damaged, partly defective and normal[18]. These previous methods used global visual characteristics of color, morphology, and texture to describe the soybean seeds. State-of-the-art low-level local visual feature representations based on the bag-of-features model have shown great potential in object recognition. The proposed method can organically merge distinct categories of semantic dictionaries by generating new low-dimensional descriptors in a low-rank subspace, eliminating the influence of irrelevant semantic dictionary information in that space.
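
For the low-rank subspace step, a minimal sketch of the standard LRR formulation, min ||Z||_* + λ||E||_{2,1} subject to X = XZ + E, solved by the commonly used inexact augmented Lagrange multiplier (ALM) scheme, is given below. It assumes the joint BoF descriptors are stacked as the columns of X; the solver details and parameter values are illustrative assumptions, not the exact procedure or settings reported in the paper.

```python
import numpy as np

def lrr_inexact_alm(X, lam=0.1, rho=1.1, mu=1e-6, mu_max=1e10, tol=1e-7, max_iter=500):
    """Solve min ||Z||_* + lam*||E||_{2,1}  s.t.  X = X Z + E  (columns of X are samples)."""
    d, n = X.shape
    Z = np.zeros((n, n)); J = np.zeros((n, n)); E = np.zeros((d, n))
    Y1 = np.zeros((d, n)); Y2 = np.zeros((n, n))
    inv_mat = np.linalg.inv(np.eye(n) + X.T @ X)   # constant across iterations
    for _ in range(max_iter):
        # J-step: singular value thresholding of Z + Y2/mu with threshold 1/mu
        U, s, Vt = np.linalg.svd(Z + Y2 / mu, full_matrices=False)
        J = U @ np.diag(np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # Z-step: closed-form least-squares update
        Z = inv_mat @ (X.T @ (X - E) + J + (X.T @ Y1 - Y2) / mu)
        # E-step: column-wise shrinkage (proximal operator of the l2,1 norm)
        Q = X - X @ Z + Y1 / mu
        norms = np.linalg.norm(Q, axis=0)
        scale = np.maximum(norms - lam / mu, 0.0) / np.maximum(norms, 1e-12)
        E = Q * scale
        # dual variable and penalty updates
        R1 = X - X @ Z - E
        R2 = Z - J
        Y1 += mu * R1
        Y2 += mu * R2
        mu = min(rho * mu, mu_max)
        if max(np.abs(R1).max(), np.abs(R2).max()) < tol:
            break
    return Z, E   # Z (or XZ) provides the low-rank encoding fed to the classifier
```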
