Abstract

In recent years, Generative Adversarial Networks (GANs) and variants such as pix2pix have come to occupy a central position in image generation. Despite the impressive performance of the pix2pix model on image-to-image translation tasks, its reliance on large amounts of paired training data and computational resources has constrained its broader application. To address these issues, this paper introduces a novel algorithm, Keywords-Based Conditional Image Transformation (KB-CIT). KB-CIT dynamically extracts keywords from the input grayscale image to acquire and generate training data, avoiding the need for a large paired dataset and significantly improving the efficiency of image transformation. Experimental results demonstrate that KB-CIT performs remarkably well on image colorization tasks and can generate high-quality colorized images even with limited training data. The algorithm not only simplifies data collection but also offers significant advantages in computational resource requirements, data utilization efficiency, and personalized real-time training of the model, thereby opening new possibilities for the widespread application of the pix2pix model.
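The abstract only outlines the KB-CIT workflow (keyword extraction from the grayscale input, on-the-fly acquisition of paired data, and pix2pix-style training), so the following Python sketch is purely illustrative: every function name here (`extract_keywords`, `acquire_training_pairs`, `train_pix2pix`) is a hypothetical placeholder standing in for components the abstract names but does not specify.

```python
# Illustrative sketch of the KB-CIT pipeline as described in the abstract.
# All function bodies are placeholders; the paper's actual keyword
# extractor, data-acquisition mechanism, and training details differ.
from typing import Callable, List, Tuple

import numpy as np


def extract_keywords(gray_image: np.ndarray, top_k: int = 5) -> List[str]:
    """Placeholder: derive descriptive keywords from the grayscale input,
    e.g. via a pretrained classifier or captioning model (assumption)."""
    raise NotImplementedError("plug in any image-tagging model here")


def acquire_training_pairs(
    keywords: List[str],
) -> List[Tuple[np.ndarray, np.ndarray]]:
    """Placeholder: fetch color images matching the keywords and pair each
    with its desaturated version, yielding (gray, color) training pairs."""
    raise NotImplementedError("e.g. query an image source, then desaturate")


def train_pix2pix(
    pairs: List[Tuple[np.ndarray, np.ndarray]],
) -> Callable[[np.ndarray], np.ndarray]:
    """Placeholder: train a pix2pix-style conditional GAN on the small,
    dynamically acquired paired dataset and return the generator."""
    raise NotImplementedError("any conditional GAN trainer would fit here")


def colorize(gray_image: np.ndarray) -> np.ndarray:
    """End-to-end KB-CIT flow as sketched in the abstract: keywords come
    from the input itself, a small paired dataset is built on the fly, and
    a generator is trained per input rather than on a large fixed corpus."""
    keywords = extract_keywords(gray_image)
    pairs = acquire_training_pairs(keywords)
    generator = train_pix2pix(pairs)
    return generator(gray_image)
```

The key design point the abstract emphasizes is that the paired dataset is assembled per input image, which is what removes the need for a large pre-collected corpus and enables the personalized, real-time training claimed above.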
