Abstract

Crop seed grading methods based on deep learning have achieved strong recognition results. However, an effective deep neural network for seed grading usually requires relatively high computational complexity, memory, or inference time, which critically hampers the deployment of complex CNNs on devices with limited computational resources. For this reason, a method combining layer pruning and filter pruning is proposed to realize fast and high-purity seed grading. First, we propose an effective approach based on feature representation to eliminate redundant convolutional layers, which greatly reduces the model's consumption of device storage. Then, filter-level pruning based on the Taylor expansion criterion is introduced to further eliminate redundant information in the remaining convolutional layers. Finally, an effective and practical knowledge distillation technique (MEAL V2) is adopted to transfer knowledge from well-performing models and compensate for the information loss caused by pruning. Experiments on red kidney bean datasets demonstrate that the method is effective and feasible. The proposed Vgg_Beannet achieves 4\(\times \) inference acceleration with an accuracy reduction of only 0.13% when 90% of the filters are pruned. Moreover, compared with handcrafted lightweight architectures such as MobileNetV2 and MixNet, the pruned network is faster (2.07 ms vs. 7.83 ms and 22.23 ms) and more accurate (96.33% vs. 95.94% and 94.89%).

Keywords: Seed grading · Deep learning · Neural network pruning · Knowledge distillation
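
To make the filter-pruning signal named in the abstract concrete, below is a minimal sketch of the first-order Taylor expansion criterion (Molchanov et al.): a filter's importance is estimated as the absolute product of its output activation and the loss gradient with respect to that activation, averaged per channel. The names `model`, `loader`, and `loss_fn` are placeholders, not artifacts of this paper; the sketch assumes a standard PyTorch CNN with NCHW feature maps.

```python
import torch
import torch.nn as nn

def taylor_filter_importance(model, loader, loss_fn, device="cpu"):
    """Rank Conv2d filters by |activation * gradient| (first-order Taylor term)."""
    activations, scores = {}, {}

    def hook(name):
        def fn(module, inp, out):
            out.retain_grad()            # keep gradients of this feature map
            activations[name] = out
        return fn

    # Attach hooks to every convolutional layer to capture its output.
    handles = [m.register_forward_hook(hook(n))
               for n, m in model.named_modules()
               if isinstance(m, nn.Conv2d)]

    # One batch is enough to illustrate the criterion; in practice the
    # scores are accumulated over many batches before pruning.
    x, y = next(iter(loader))
    loss = loss_fn(model(x.to(device)), y.to(device))
    loss.backward()

    for name, act in activations.items():
        # |a * dL/da|, averaged over batch and spatial dims -> one score per filter.
        scores[name] = (act * act.grad).abs().mean(dim=(0, 2, 3))

    for h in handles:
        h.remove()
    return scores  # lower score => filter contributes less, prune it first
```

Filters with the smallest scores are removed first; pruning "90% of the filters" as in the reported experiment corresponds to keeping only the top 10% of filters by this ranking before fine-tuning.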
