Image-to-image translation is the process of transforming images from one domain to another. Generative Adversarial Networks (GANs) and Convolutional Neural Networks (CNNs) are widely used for this task. This study aims to find the most effective loss function for GAN architectures and thereby synthesize better images. To this end, experimental results were obtained by changing the loss function of the Pix2Pix method, one of the basic GAN architectures. The default reconstruction loss in Pix2Pix is the Mean Absolute Error (MAE), also known as the L_1 metric. In this study, the effect of the convolution-based perceptual similarity metrics CONTENT, LPIPS, and DISTS on image-to-image translation was examined by using them as the loss function in the Pix2Pix architecture. In addition, the effects of combining each perceptual similarity metric with the original L_1 loss at a 50% ratio (L_1_CONTENT, L_1_LPIPS, and L_1_DISTS) were analyzed. Performance analyses of the methods were carried out on the Cityscapes, Denim2Mustache, Maps, and Papsmear datasets. Visual results were evaluated with conventional (FSIM, HaarPSI, MS-SSIM, PSNR, SSIM, VIFp, and VSI) and up-to-date (FID and KID) image comparison metrics. The results show that better images are obtained when convolution-based perceptual metrics are used instead of conventional pixel-wise losses in the loss function of GAN architectures, and that LPIPS and DISTS are promising candidates for the loss functions of future GAN architectures.
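The 50% blend of the pixel-wise L_1 (MAE) loss with a perceptual term described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `perceptual_distance` is a hypothetical placeholder standing in for an LPIPS/DISTS/CONTENT network, and images are plain nested lists of pixel values rather than tensors.

```python
# Sketch of the blended reconstruction objective discussed in the abstract,
# assuming an equal 50/50 weighting between the pixel-wise L_1 (MAE) term
# and a perceptual similarity term.

def l1_loss(fake, real):
    """Mean Absolute Error: the original Pix2Pix reconstruction loss."""
    flat_fake = [p for row in fake for p in row]
    flat_real = [p for row in real for p in row]
    return sum(abs(f - r) for f, r in zip(flat_fake, flat_real)) / len(flat_real)

def perceptual_distance(fake, real):
    """Hypothetical placeholder: a real setup would run the images through
    an LPIPS or DISTS network here; MAE is reused only to keep the sketch
    self-contained and runnable."""
    return l1_loss(fake, real)

def blended_loss(fake, real, alpha=0.5):
    """L_1_LPIPS-style objective: alpha * L_1 + (1 - alpha) * perceptual."""
    return alpha * l1_loss(fake, real) + (1 - alpha) * perceptual_distance(fake, real)

fake = [[0.2, 0.4], [0.6, 0.8]]
real = [[0.0, 0.5], [0.5, 1.0]]
print(round(blended_loss(fake, real), 4))  # 0.15 with this toy data
```

In a real Pix2Pix training loop this blended term would be added to the adversarial loss of the generator; only the reconstruction part is shown here.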