Color tone represents the dominant color of an image, and training generative adversarial networks (GANs) to change the color tones of generated images is desirable in many applications. Advances such as HistoGAN can manipulate the color tones of generated images with a target image, yet challenges remain. The Kullback-Leibler (KL) divergence adopted by HistoGAN may cause color-tone mismatches, because it can assign an unbounded score to a generator. Moreover, relying solely on distribution estimation also lowers the fidelity of images produced by HistoGAN. To address these issues, we propose a new approach, named dynamic-weights GAN (DW-GAN). We use two discriminators to estimate the degree of distribution matching and the similarity of details, with a Laplacian operator and a hinge loss. The Laplacian operator helps capture more image details, while the hinge loss, derived from the mean difference (MD), avoids unbounded scores. To synthesize the desired images, we combine the losses of the two discriminators with the generator loss and, since a generator's training signal comes from its discriminators, we make the weights of the two estimated scores dynamic based on the previous discriminator outputs. In addition, we integrate the dynamic weights into other GAN variants (e.g., HistoGAN and StyleGAN) to show the improved performance. Finally, we conduct extensive experiments on one industrial fabric dataset and seven public datasets to demonstrate that DW-GAN produces higher-fidelity images and achieves the lowest Fréchet inception distance (FID) scores among state-of-the-art (SOTA) baselines.
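The combination step described above can be illustrated with a minimal sketch: two discriminator scores from the previous iteration are turned into weights that blend the two loss terms in the generator objective. The function names and the normalization scheme (a softmax over the absolute magnitudes of the previous scores) are illustrative assumptions for exposition, not the paper's exact formulation.

```python
import math

# Hypothetical sketch of the dynamic-weights idea: the generator's total
# loss combines a distribution-matching loss and a detail-similarity loss,
# with weights derived from the previous discriminators' outputs.
# The softmax normalization below is an illustrative assumption.

def dynamic_weights(prev_dist_score: float, prev_detail_score: float):
    """Map the previous discriminator outputs to two positive weights
    that sum to 1 (softmax over absolute magnitudes -- an assumption)."""
    a, b = abs(prev_dist_score), abs(prev_detail_score)
    ea, eb = math.exp(a), math.exp(b)
    return ea / (ea + eb), eb / (ea + eb)

def generator_loss(dist_loss: float, detail_loss: float,
                   prev_dist_score: float, prev_detail_score: float) -> float:
    """Blend the two loss terms using the dynamically computed weights."""
    w1, w2 = dynamic_weights(prev_dist_score, prev_detail_score)
    return w1 * dist_loss + w2 * detail_loss

# The larger the previous detail score, the more weight the detail loss gets.
print(generator_loss(0.8, 0.4, 1.0, 2.0))
```

In this sketch, a discriminator that produced a stronger signal in the previous step contributes a larger weight to the generator's objective; the weighted sum always lies between the two individual loss values.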