The use of deep learning techniques to identify grape leaf diseases relies on large, high-quality datasets. However, large image datasets consume substantial computing resources, and generative models trained on them are prone to mode collapse. In this paper, a depth-separable multifeature generative adversarial network (DMFGAN) is proposed to augment grape leaf disease data. First, a multifeature extraction block (MFEB) based on a four-channel feature fusion strategy is designed to improve the quality of the generated images and to avoid the weak feature-learning ability that single-channel feature extraction causes in generative adversarial networks. Second, a depthwise-convolution-based D-discriminator is designed to strengthen the discriminator while reducing the number of model parameters. Third, the SELU activation function replaces the activation functions used in DCGAN, which are insufficient to fit the grape leaf disease image data. Finally, an MFLoss function with a gradient penalty term is proposed to reduce mode collapse during the training of generative adversarial networks. Comparisons of visual quality and quantitative evaluation metrics across the images generated by different models, together with recognition-network experiments on the augmented grape disease data, show that the method is effective for augmenting grape leaf disease data. Under the same experimental conditions, DMFGAN generates higher-quality and more diverse images with fewer parameters than other generative adversarial networks, and the number of mode collapses during training is reduced, making the method more effective in practical applications.
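The abstract does not give the exact form of MFLoss; the sketch below is a minimal PyTorch illustration of a standard WGAN-GP-style gradient penalty term, the kind of penalty the loss is described as incorporating. The function name `gradient_penalty` and its arguments are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): a standard WGAN-GP-style
# gradient penalty of the kind a loss with a gradient penalty term uses.
import torch


def gradient_penalty(discriminator, real_images, fake_images, device="cpu"):
    """Penalize the discriminator's gradient norm on interpolated samples."""
    batch_size = real_images.size(0)
    # One random interpolation coefficient per sample, broadcast over C, H, W.
    alpha = torch.rand(batch_size, 1, 1, 1, device=device)
    interpolated = alpha * real_images + (1 - alpha) * fake_images
    interpolated.requires_grad_(True)

    scores = discriminator(interpolated)
    gradients = torch.autograd.grad(
        outputs=scores,
        inputs=interpolated,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,
        retain_graph=True,
    )[0]
    gradients = gradients.view(batch_size, -1)
    # Push the per-sample gradient norm toward 1 (the standard WGAN-GP penalty).
    return ((gradients.norm(2, dim=1) - 1) ** 2).mean()
```

In practice, a penalty of this form is added to the discriminator loss with a weighting coefficient; it regularizes the discriminator and is widely used to stabilize adversarial training and reduce mode collapse.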