Abstract

We present a pipeline for the visual localization and classification of agricultural pest insects that computes a saliency map and applies deep convolutional neural network (DCNN) learning. First, we used a global contrast region-based approach to compute a saliency map for localizing pest insect objects. Bounding squares containing the targets were then extracted, resized to a fixed size, and used to construct a large standard database called Pest ID. This database was then used for self-learning of local image features which were, in turn, used for classification by a DCNN. DCNN learning optimized the critical parameters, including the size, number and convolutional stride of the local receptive fields, the dropout ratio and the final loss function. To demonstrate the practical utility of the DCNN, we explored different architectures by shrinking their depth and width, and found effective sizes that can serve as alternatives for practical applications. On the test set of paddy field images, our architectures achieved a mean Average Precision (mAP) of 0.951, a significant improvement over previous methods.
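
Global contrast based saliency ranks colors or regions by how strongly they contrast with the rest of the image. The following is a minimal NumPy sketch of the simpler histogram-contrast variant, offered only as a stand-in for the region-based method the paper actually uses; the bin count and the use of raw RGB are illustrative assumptions.

```python
import numpy as np

def histogram_contrast_saliency(img, bins=12):
    """Simplified global-contrast (histogram contrast) saliency.
    img: HxWx3 uint8 RGB array; returns an HxW float saliency map in [0, 1]."""
    # Quantize each channel into `bins` levels and fold into one color index per pixel.
    q = (img.astype(np.int64) * bins) // 256                   # values in [0, bins)
    idx = (q[..., 0] * bins + q[..., 1]) * bins + q[..., 2]    # HxW color indices
    n_colors = bins ** 3
    counts = np.bincount(idx.ravel(), minlength=n_colors).astype(np.float64)
    freq = counts / counts.sum()
    # Mean RGB of each occupied quantized color.
    sums = np.stack([np.bincount(idx.ravel(),
                                 weights=img[..., c].ravel().astype(np.float64),
                                 minlength=n_colors) for c in range(3)], axis=1)
    occ = counts > 0
    means = np.zeros((n_colors, 3))
    means[occ] = sums[occ] / counts[occ, None]
    # Saliency of a color: frequency-weighted color distance to all other colors.
    dists = np.linalg.norm(means[occ][:, None, :] - means[occ][None, :, :], axis=2)
    sal_colors = np.zeros(n_colors)
    sal_colors[occ] = dists @ freq[occ]
    sal = sal_colors[idx]                                       # back-project to pixels
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)
```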

Highlights

  • The objective of an automated visual system is to provide expert-level pest insect recognition while requiring minimal operator training[7]

  • We addressed several common limitations of such systems: (i) the need for a large training set, by collecting a large number of natural images from the Internet; (ii) the need for fixed-size input images, by introducing a recently developed method, “global contrast based salient region detection”[40], to automatically localize regions containing pest insect objects, resize them to a common scale, and construct the standard database Pest ID for training a deep convolutional neural network (DCNN); (iii) optimization difficulties, by varying several critical parameters and powerful regularization strategies, including the size, number and convolutional stride of the local receptive fields, the dropout ratio[41] and the final loss function, to find the best DCNN configuration (see the sketch after this list)

  • We have demonstrated the effectiveness of using a saliency map-based approach for localizing pest insect objects in natural images
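
As a rough illustration of point (iii), the sketch below wires up a small convolutional network in PyTorch exposing the kinds of knobs the paper tunes: receptive field size, filter count, convolutional stride, dropout ratio, and a softmax (cross-entropy) loss. All layer sizes, the class count and the 96x96 input are placeholders, not the configuration reported in the paper.

```python
import torch
import torch.nn as nn

class PestNet(nn.Module):
    """Illustrative DCNN over the paper's search space (sizes are placeholders)."""
    def __init__(self, n_classes=12, dropout=0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2),   # receptive field 5x5, stride 2
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, stride=1),  # receptive field 3x3, stride 1
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, stride=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(dropout),               # dropout ratio is one of the tuned parameters
            nn.Linear(128, n_classes),         # softmax is folded into the loss below
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Softmax (cross-entropy) loss as the final loss function.
model = PestNet()
criterion = nn.CrossEntropyLoss()
x = torch.randn(4, 3, 96, 96)                  # assumed fixed-size crops, e.g. 96x96
loss = criterion(model(x), torch.randint(0, 12, (4,)))
```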

Summary

Neural Network

Deep convolutional neural networks (DCNNs) have provided theoretical answers to the question of how effective image representations can be learned[34,35], and have been reported to achieve state-of-the-art performance on many other image recognition tasks[36,37,38]. Their deep architectures, combined with good weight quantization schemes, optimization algorithms and initialization strategies, allow excellent selectivity for complex, high-level features that are robust to irrelevant input transformations, leading to useful representations for classification[39]. In performing these parameter-variation tests, we were able to assess the DCNN's practical utility for pest control in a paddy field, and we discuss the effects of reducing the architecture on runtime and performance. The bounding boxes containing the pest insect objects are extended into squares (Fig. 1(e)) and cropped from the original image, as sketched below.
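
A minimal sketch of that square-cropping step, assuming a saliency map such as the one computed above; the threshold and output size are illustrative choices, not the paper's settings.

```python
import numpy as np
import cv2  # opencv-python

def crop_salient_square(img, saliency, out_size=96, thresh=0.5):
    """Extend the bounding box of the thresholded saliency map into a square,
    crop it from the image, and resize it to a fixed size."""
    ys, xs = np.nonzero(saliency >= thresh)
    if xs.size == 0:                                    # nothing salient: fall back to whole image
        return cv2.resize(img, (out_size, out_size))
    x0, x1, y0, y1 = xs.min(), xs.max(), ys.min(), ys.max()
    side = int(max(x1 - x0, y1 - y0) + 1)               # square side = longer edge of the box
    cx, cy = (x0 + x1) // 2, (y0 + y1) // 2
    h, w = img.shape[:2]
    sx = int(np.clip(cx - side // 2, 0, max(w - side, 0)))
    sy = int(np.clip(cy - side // 2, 0, max(h - side, 0)))
    crop = img[sy:sy + side, sx:sx + side]
    return cv2.resize(crop, (out_size, out_size))
```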

[Tables: species and image quantities in the Pest ID database; validation accuracy of SVM- and Fisher-based classifiers]
