Deep learning algorithms offer clear structure and high accuracy in image recognition. Accurate identification of crop diseases and insect pests can make pest control in farmland more targeted, which benefits agricultural production. This paper proposes a DCNN-G model based on deep learning fused with Google data analysis. The model is trained on 640 data samples and then evaluated on 5000 test samples, with 80% of the data used as the training set and 20% as the test set, and its accuracy is compared against conventional recognition models. The results show that degrading a quality-level-1 image with the stated degradation parameters yields images at nine quality levels. YOLO-V4, an improved YOLO network, is used to test and validate the images after quality-level classification. When images of different quality levels, especially those of adjacent levels, are observed subjectively by the human eye, their quality is difficult to distinguish. With the algorithm model proposed in this article, the recognition accuracy reaches 95%, far higher than the 84% of the baseline DCNN model. Quality-level classification of crop disease and pest images provides important prior information for understanding such images and also offers a scientific basis for testing the imaging capability of sensors and for objectively evaluating the image quality of crop diseases and pests. Using convolutional neural networks to classify the image quality of crop pests and diseases not only extends the application field of deep learning but also provides a new method for assessing crop pest and disease image quality.
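To make the described pipeline concrete, the following is a minimal sketch, not the paper's released implementation, of the two elements the abstract names: an 80%/20% train/test split and a small convolutional network that assigns one of nine quality levels to a crop pest/disease image. The dataset tensors, network sizes, and training settings here are illustrative assumptions; only the 80/20 split and the nine quality levels come from the abstract.

```python
# Hypothetical sketch: CNN-based quality-level classification with an 80/20 split.
# Network architecture, image size, and training hyperparameters are assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, random_split

NUM_QUALITY_LEVELS = 9  # nine quality levels produced by the degradation step (per abstract)

class QualityLevelCNN(nn.Module):
    """Small CNN mapping an RGB image to one of the quality levels."""
    def __init__(self, num_classes: int = NUM_QUALITY_LEVELS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Toy tensors stand in for the real crop pest/disease images.
images = torch.randn(100, 3, 64, 64)
labels = torch.randint(0, NUM_QUALITY_LEVELS, (100,))
dataset = TensorDataset(images, labels)

# 80% training / 20% test split, as described in the abstract.
n_train = int(0.8 * len(dataset))
train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])
train_loader = DataLoader(train_set, batch_size=16, shuffle=True)

model = QualityLevelCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(2):  # brief demonstration loop
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```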