Abstract

Generative models have recently become a prominent research topic in artificial intelligence. Among them, Generative Adversarial Networks (GANs) have transformed deep learning by producing high-quality synthetic data that closely resembles real-world data. However, the effectiveness of GANs depends heavily on the size and quality of the training data, and in many real-world applications collecting large amounts of high-quality data is impractical, time-consuming, and expensive. Consequently, there has been intense interest in recent years in GAN models that can work with limited data. Such models are particularly useful where available data is scarce, as in medical imaging, or in creative applications such as generating new works of art. In this study, we propose a GAN model that can learn from a single training image. Our model consists of multiple GANs operating sequentially at different scales: at each scale, a GAN learns the features of the training image at that resolution and passes them on to the next GAN. Samples produced by the GAN at the finest scale share the characteristics of the training image while exhibiting different, realistic structures. We incorporate a self-attention module to increase the realism and quality of the generated images, and we introduce a new scaling method that further improves the model's performance. The quantitative and qualitative results of our experiments show that the model generates images successfully, and we demonstrate its robustness on several image-manipulation applications. In summary, our model produces realistic, high-quality, diverse images from a single training image while offering short training time, memory efficiency, and good training stability, and it is flexible enough for use in domains where only limited data is available.
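To illustrate the two ideas the abstract names, the sketch below shows (a) a non-local self-attention step over flattened image features and (b) a coarse-to-fine generation loop in which each scale upsamples the previous output and injects noise before refinement. This is a minimal NumPy sketch of the general multi-scale principle, not the paper's implementation: the projection matrices, the nearest-neighbour upsampling, the noise amplitude, and the absence of trained per-scale generators are all simplifying assumptions.

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Non-local self-attention over spatial positions.
    x: (HW, C) flattened feature map; Wq/Wk/Wv: (C, C) projections.
    Each position aggregates features from every other position."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[1])
    scores -= scores.max(axis=1, keepdims=True)      # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)          # rows sum to 1
    return attn @ v

def upsample(img, factor):
    """Nearest-neighbour upsampling (stand-in for bilinear resizing)."""
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

def generate_pyramid(coarse, n_scales, rng, noise_amp=0.1):
    """Coarse-to-fine generation: each stage upsamples the previous
    output and adds noise, standing in for a per-scale trained
    generator that would refine the result."""
    img = coarse
    outputs = [img]
    for _ in range(n_scales - 1):
        img = upsample(img, 2)
        img = img + noise_amp * rng.standard_normal(img.shape)
        outputs.append(img)
    return outputs
```

In a trained model, each loop iteration would pass the upsampled image and noise through that scale's generator; varying the injected noise is what yields diverse samples that still follow the single training image's statistics.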
