Abstract

Deep learning has shown promising results in plant disease detection, fruit counting, and yield estimation, and is attracting increasing interest in agriculture. Deep learning models are generally based on several million parameters that generate exceptionally large weight matrices, which require large amounts of memory and computational power for training, testing, and deployment. Unfortunately, these requirements make it difficult to deploy such models on the low-cost, resource-limited devices present in the field. In addition, the lack of connectivity, or its poor quality, on farms often rules out remote computation. One approach used to save memory and speed up processing is to compress the models. In this work, we tackle the challenges related to resource limitation by compressing state-of-the-art models frequently used in image classification. To this end, we apply model pruning and quantization to LeNet5, VGG16, and AlexNet. The original and compressed models were evaluated on the plant seedling classification benchmark (V2 Plant Seedlings Dataset) and the Flavia database. Results reveal that it is possible to compress the size of these models by a factor of 38 and to reduce the FLOPs of VGG16 by a factor of 99 without considerable loss of accuracy.
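
As a rough illustration of the two compression steps named above, the sketch below applies magnitude-based weight pruning followed by post-training dynamic quantization to a torchvision VGG16. It is a minimal sketch under our own assumptions (PyTorch pruning and quantization utilities, a 90% pruning ratio, 12 output classes), not the exact pipeline used in this work.

```python
# Minimal sketch of pruning + quantization (our own assumptions, not the
# authors' exact pipeline), applied to VGG16 for illustration.
import torch
import torch.nn.utils.prune as prune
from torchvision.models import vgg16

model = vgg16(num_classes=12)  # assumption: 12 classes, as in the V2 Plant Seedlings Dataset

# Magnitude pruning: zero out the 90% smallest-magnitude weights in every
# convolutional and fully connected layer.
for module in model.modules():
    if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
        prune.l1_unstructured(module, name="weight", amount=0.9)
        prune.remove(module, "weight")  # bake the sparsity into the weight tensor

# Post-training dynamic quantization: store the large fully connected layers
# in 8-bit integers instead of 32-bit floats.
compressed = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
```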

Highlights

  • Deep learning (DL) is playing a crucial role in precision agriculture to improve farm yield [1,2,3]

  • In contrast to the works of [10,11], which focus on segmentation, this paper focuses on classification, which is a more common task in the application of DL in agriculture

  • To evaluate the performance of model pruning and quantization as model compression techniques for agricultural applications, we focused on the reduction of the memory footprint and on the speed-up (see the measurement sketch after this list)
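
The sketch below shows, under our own assumptions, one straightforward way to measure these two quantities: the on-disk size of a serialized model and its mean CPU inference latency, from which the compression ratio and speed-up of a compressed model relative to the original can be derived. The helper names and the 224x224 input size are illustrative, not taken from the paper.

```python
# Minimal measurement sketch (our own assumptions, PyTorch): memory footprint
# on disk and mean inference latency, the two quantities behind the reported
# compression ratio and speed-up.
import os
import tempfile
import time

import torch


def size_on_disk_mb(model: torch.nn.Module) -> float:
    """Serialize the state dict and return its size in megabytes."""
    with tempfile.TemporaryDirectory() as tmp:
        path = os.path.join(tmp, "model.pt")
        torch.save(model.state_dict(), path)
        return os.path.getsize(path) / 1e6


def mean_latency_s(model: torch.nn.Module, runs: int = 20) -> float:
    """Average CPU inference time on a single 224x224 RGB image."""
    model.eval()
    x = torch.randn(1, 3, 224, 224)
    with torch.no_grad():
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
    return (time.perf_counter() - start) / runs


# compression_ratio = size_on_disk_mb(original) / size_on_disk_mb(compressed)
# speed_up          = mean_latency_s(original)  / mean_latency_s(compressed)
```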


Summary

Introduction

Deep learning (DL) is playing a crucial role in precision agriculture to improve farm yield [1,2,3]. Applying deep learning in agriculture involves the acquisition and processing of a large amount of crop-related data. Due to their huge number of parameters, DL models are usually inefficient on low-cost devices with limited resources [4]. As a result, they are usually deployed on remote servers. Many rural areas that do have connectivity only have low bandwidths that allow limited data traffic [7,8]; this can increase the response time, since DL models process a huge amount of data. The authors of [10,11] designed a model compression technique based on separable convolution and singular value decomposition, and applied it to very deep convolutional neural networks (CNNs) for plant image segmentation, with the aim of deploying the compressed model in the field.
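
As a generic illustration of the singular value decomposition idea mentioned above (not the implementation of [10,11]), the sketch below factorizes one fully connected layer into two thinner layers of rank r, reducing its parameter count from m*n to r*(m+n). The function name and the rank value are our own choices.

```python
# Generic SVD-based low-rank factorization sketch (our own assumptions,
# not the method of [10,11]): replace a dense Linear layer W (m x n)
# with two smaller layers of rank r.
import torch


def low_rank_factorize(linear: torch.nn.Linear, rank: int) -> torch.nn.Sequential:
    """Replace one Linear layer with two Linear layers via truncated SVD."""
    W = linear.weight.data                       # shape (out_features, in_features)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * S[:rank]                 # (out_features, rank)
    V_r = Vh[:rank, :]                           # (rank, in_features)

    first = torch.nn.Linear(linear.in_features, rank, bias=False)
    second = torch.nn.Linear(rank, linear.out_features, bias=linear.bias is not None)
    first.weight.data = V_r
    second.weight.data = U_r
    if linear.bias is not None:
        second.bias.data = linear.bias.data.clone()
    return torch.nn.Sequential(first, second)


# Example: compress a 4096x4096 fully connected layer (as in VGG16) to rank 256,
# cutting its weights from ~16.8M to ~2.1M parameters.
fc = torch.nn.Linear(4096, 4096)
compressed_fc = low_rank_factorize(fc, rank=256)
```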

State-of-the-Art Models
Deep Learning Models Constraints
Related Works on Models Compression
Parameter Pruning
Pruning Schedule
Salience Criteria
Granulation
Quantization
Low-Rank Factorization
Separable Convolution
Knowledge Distillation
Model Compression Metric
Pruning
Compression of the Model
Datasets
Experimentation Setup
Experimentation Setting 1
Experimentation Setting 2
Experimentation Setting 3
Evaluation
Pruning with Same Pruning Ratio per Layer
Only Fully Connected Layers Pruning
Weights
Filter Pruning
Input Layer Resizing
Conclusions and Future Work
