Abstract

The primary motivation of image-to-image transformation is to convert an image from one domain to another. Generative Adversarial Networks (GANs) are the recent trend for image-to-image transformation. Existing GAN models suffer from the lack of a proper synthesis objective. In this paper, we propose a new Cyclic-Synthesized Generative Adversarial Network (CSGAN) for the development of expert and intelligent systems for image-to-image transformation. The proposed CSGAN uses a new objective function based on the proposed cyclic-synthesized loss between the synthesized image of one domain and the cycled image of another domain. The proposed CSGAN enforces a more accurate mapping from one domain to another by limiting the scope of redundant transformations with the help of the cyclic-synthesized loss. The performance of the proposed CSGAN is evaluated on four benchmark image-to-image transformation datasets: the CUHK Face dataset, the WHU-IIP Thermal-Visible Face dataset, the CMP Facades dataset, and the NYU-Depth dataset. The results are computed using the widely used evaluation metrics MSE, SSIM, PSNR, and LPIPS. The experimental results of the proposed CSGAN approach are compared with the latest state-of-the-art approaches, including GAN, Pix2Pix, DualGAN, CycleGAN, and PS2GAN. The proposed CSGAN outperforms all these methods on the CUHK, WHU-IIP, and NYU-Depth datasets, and exhibits promising, comparable performance on the Facades dataset in terms of both qualitative and quantitative measures. The code is available at https://github.com/KishanKancharagunta/CSGAN.
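The cyclic-synthesized loss described above can be sketched in a few lines. The following is a minimal illustration, assuming an L1 (mean absolute error) penalty between the synthesized image and the cycled image; the function name, signature, and toy data are illustrative assumptions, not taken from the paper's released code.

```python
import numpy as np

def cyclic_synthesized_loss(synthesized, cycled):
    """Hypothetical sketch of a cyclic-synthesized loss: the mean
    absolute (L1) difference between a generator's synthesized image
    and the image recovered after a full cycle of translations.

    Both arguments are arrays of identical shape (e.g. H x W x C).
    """
    return float(np.mean(np.abs(synthesized - cycled)))

# Toy 2x2 "images" purely for illustration.
syn = np.array([[0.2, 0.4], [0.6, 0.8]])
cyc = np.array([[0.1, 0.5], [0.6, 1.0]])
loss = cyclic_synthesized_loss(syn, cyc)
```

In a full training objective this term would be weighted and added to the adversarial and cycle-consistency losses; the sketch shows only the extra penalty itself.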
