Generative Adversarial Networks (GANs) have proven effective at producing realistic images in image-to-image translation, including sketch-to-image translation. Most discriminators in GANs use encoder or decoder blocks designed for image segmentation and classification tasks. U-Net-based architectures are commonly used in the generator but rarely in the discriminator; when a U-Net is used in the discriminator, it is typically for image super-resolution or segmentation tasks. In this research, a U-Net-based discriminator is applied to the image translation task. The U-Net-based discriminator exploits both local and global differences between real and fake images, which helps maintain global and local data representations. The ResNet-9 generator uses skip connections, shortcuts, and concatenations that enable information to flow from earlier to later layers; this preserves the original image features and mitigates the vanishing-gradient problem found in plain generators. Pairing a strong discriminator with an effective generator improves the system's performance. The available data were initially unpaired, so datasets from various sources were combined to form sketch-image pairs. The input is a 512x256 image containing a human face sketch and the corresponding real photograph; each pair is split into a 256x256 sketch and a 256x256 image. The system's output is the human face image corresponding to the input sketch.
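To make the skip-connection argument concrete, the sketch below shows a residual block of the kind used in ResNet-style generators. The channel count, padding, and normalization choices are illustrative assumptions and not the paper's exact configuration; the point is that the block input is added to its output, so features from earlier layers flow directly to later layers and gradients have a shortcut path.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Minimal residual block sketch (assumed layout, not the paper's exact one)."""

    def __init__(self, channels: int = 256):
        super().__init__()
        self.block = nn.Sequential(
            nn.ReflectionPad2d(1),
            nn.Conv2d(channels, channels, kernel_size=3),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.ReflectionPad2d(1),
            nn.Conv2d(channels, channels, kernel_size=3),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x):
        # Skip connection: the input is added to the transformed output,
        # preserving earlier-layer features and easing vanishing gradients.
        return x + self.block(x)
```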
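The data preparation step (splitting each 512x256 pair into a 256x256 sketch and a 256x256 photo) could look like the following minimal sketch. The assumption that the sketch occupies the left half and the photograph the right half, as well as the file name, are hypothetical; only the dimensions come from the abstract.

```python
from PIL import Image

def split_pair(path: str):
    """Split a 512x256 sketch-photo pair into two 256x256 images."""
    pair = Image.open(path).convert("RGB")
    assert pair.size == (512, 256), f"unexpected size {pair.size}"
    sketch = pair.crop((0, 0, 256, 256))    # assumed: left half is the sketch
    photo = pair.crop((256, 0, 512, 256))   # assumed: right half is the real face image
    return sketch, photo

# Hypothetical usage:
# sketch, photo = split_pair("pairs/face_0001.png")
```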