Varietal purity is a critical quality indicator for seeds, yet various production processes can mix seeds from different varieties. Seed variety classification is therefore an essential step in seed production. Existing classification algorithms often suffer from limitations such as reliance on a single information source, limited feature extraction capability, long processing times, low accuracy, and the risk of irreversible damage to the seeds. To address these challenges, this paper proposes DualTransAttNet, a fast and non-destructive classification method for corn seeds based on multi-source image information and hybrid feature extraction. High-resolution hyperspectral images of multiple corn varieties were collected, and a sliding sampling approach was used to capture feature information across all spectral bands, yielding a hyperspectral dataset for corn seed classification. Hyperspectral and RGB image data were then integrated to complement one another and mitigate the limited feature diversity of single-source data. The proposed method combines the strengths of convolutional neural networks (CNNs) and transformers to extract both local and global features, effectively capturing spectral and image characteristics. Experimental results show that the DualTransAttNet model occupies only 1.758 MB and achieves an inference time of 0.019 ms. Compared with typical machine learning and deep learning models, the proposed model achieves superior performance, with an overall accuracy, F1-score, and Kappa coefficient of 90.01%, 88.9%, and 88.4%, respectively. The model's rapid inference and low parameter count make it a practical technical solution for agricultural automation and intelligent systems, improving the efficiency and profitability of agricultural production.
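To illustrate the dual-branch idea described above (a CNN branch for local spatial features from RGB images and a transformer branch for global dependencies across spectral bands, fused for classification), the following is a minimal sketch in PyTorch. The class name, layer sizes, number of spectral bands, and class count are illustrative assumptions, not the authors' published DualTransAttNet implementation.

```python
# Minimal sketch of a dual-branch (CNN + transformer) fusion classifier.
# All module names and dimensions are illustrative assumptions, not the
# published DualTransAttNet architecture.
import torch
import torch.nn as nn


class DualBranchSeedClassifier(nn.Module):
    def __init__(self, num_bands=224, num_classes=4, embed_dim=64):
        super().__init__()
        # CNN branch: local spatial features from the RGB image.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (B, 32, 1, 1)
        )
        # Transformer branch: global dependencies across spectral bands.
        # Each band's mean reflectance is projected to an embedding token.
        self.band_embed = nn.Linear(1, embed_dim)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=4, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=2)
        # Fusion head over the concatenated branch features.
        self.head = nn.Linear(32 + embed_dim, num_classes)

    def forward(self, rgb, spectra):
        # rgb: (B, 3, H, W); spectra: (B, num_bands) mean reflectance per band.
        local_feat = self.cnn(rgb).flatten(1)                # (B, 32)
        tokens = self.band_embed(spectra.unsqueeze(-1))      # (B, num_bands, embed_dim)
        global_feat = self.transformer(tokens).mean(dim=1)   # (B, embed_dim)
        fused = torch.cat([local_feat, global_feat], dim=1)  # (B, 32 + embed_dim)
        return self.head(fused)


# Example forward pass on dummy data.
model = DualBranchSeedClassifier(num_bands=224, num_classes=4)
logits = model(torch.randn(2, 3, 64, 64), torch.randn(2, 224))
print(logits.shape)  # torch.Size([2, 4])
```

The sketch only conveys the fusion pattern: local features from convolution and global features from self-attention are concatenated before the classification head, which is the complementary-feature idea the abstract attributes to the CNN-transformer hybrid.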