Abstract

As one of the important methods for sample generation, Generative Adversarial Networks (GANs) can generate samples that follow the data distribution of a given dataset. In practice, however, GANs suffer from problems such as blurred textures, unstable training, and mode collapse. To address these problems, a sample generation method is designed that combines Deep Convolutional Generative Adversarial Networks (DCGAN) with Residual Networks (ResNet). The method uses residual and convolutional networks to construct the generative and discriminative models; the deep residual network recovers rich image texture and effectively alleviates the training instability and mode collapse of adversarial networks. The performance of the algorithm is evaluated on the Butterfly Image Classification 50 Species dataset using loss function curves and FID scores, and the network model is then applied to a butterfly insect dataset for data augmentation. Two classic deep convolutional neural network models are trained on the augmented data to classify the samples. The results show that the proposed method improves the classification accuracy of the two models by 1.84% and 2.78% on the butterfly dataset. The experimental results confirm that the improved DCGAN can effectively generate butterfly image data and improve the accuracy of deep neural network models in classifying butterfly images.
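
The sketch below illustrates the general idea described above: a DCGAN-style generator whose transposed-convolution upsampling stages are followed by residual blocks. It is a minimal sketch only; the layer widths, number of blocks, and output resolution are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal sketch (PyTorch) of a DCGAN-style generator with residual blocks.
# All hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with a skip connection (channel count unchanged)."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))

class ResDCGANGenerator(nn.Module):
    """Project a latent vector, then upsample with transposed convolutions,
    inserting a residual block after each upsampling stage."""
    def __init__(self, latent_dim=100, base_channels=64):
        super().__init__()
        self.net = nn.Sequential(
            # latent_dim x 1 x 1 -> (base*8) x 4 x 4
            nn.ConvTranspose2d(latent_dim, base_channels * 8, 4, 1, 0),
            nn.BatchNorm2d(base_channels * 8),
            nn.ReLU(inplace=True),
            ResidualBlock(base_channels * 8),
            # -> (base*4) x 8 x 8
            nn.ConvTranspose2d(base_channels * 8, base_channels * 4, 4, 2, 1),
            nn.BatchNorm2d(base_channels * 4),
            nn.ReLU(inplace=True),
            ResidualBlock(base_channels * 4),
            # -> (base*2) x 16 x 16
            nn.ConvTranspose2d(base_channels * 4, base_channels * 2, 4, 2, 1),
            nn.BatchNorm2d(base_channels * 2),
            nn.ReLU(inplace=True),
            ResidualBlock(base_channels * 2),
            # -> 3 x 32 x 32 RGB image in [-1, 1]
            nn.ConvTranspose2d(base_channels * 2, 3, 4, 2, 1),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), z.size(1), 1, 1))

if __name__ == "__main__":
    # Usage: sample a batch of latent vectors and generate fake images.
    g = ResDCGANGenerator()
    fake = g(torch.randn(8, 100))
    print(fake.shape)  # torch.Size([8, 3, 32, 32])
```

The discriminator can mirror this structure with strided convolutions and residual blocks; generated images would then be used to augment the butterfly dataset before training the classification models.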
