Abstract
Generative Adversarial Networks (GANs) are currently regarded as the most effective approach to synthesizing realistic images from sketches. However, the effectiveness of this approach depends largely on the loss function used to learn the mapping between sketches and realistic images, which raises the question of how to choose an optimal loss function for this mapping. In this paper, we investigate and propose a loss function that combines a pixel-based error and a context-based error at an appropriate ratio to obtain the best training result. The proposed loss function is used to train the generator's U-Net architecture, and the trained network is then applied to convert a sketch into a realistic image. Based on two evaluation criteria, the Structural Similarity Index (SSIM) and visual observation, the assessment results on the CUHK Face Sketch Database (CUFS), the AR database (AR), and the CUHK ColorFERET Sketch Database (CUFSF) demonstrate that the proposed method is feasible.
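The abstract does not give the exact formulation of the combined loss, but a loss of this kind is commonly realized as a weighted sum of an adversarial (context) term and an L1 (pixel) term, as in pix2pix-style training. Below is a minimal sketch in PyTorch under that assumption; the weight `LAMBDA_PIXEL` is a hypothetical placeholder for the paper's "proper ratio", which is not specified here.

```python
import torch
import torch.nn as nn

# Hypothetical weight between the two error terms; the paper's
# actual ratio is not given in the abstract.
LAMBDA_PIXEL = 100.0

l1_loss = nn.L1Loss()              # pixel-based error: per-pixel difference
adv_loss = nn.BCEWithLogitsLoss()  # context-based error: adversarial term

def generator_loss(disc_logits_fake, generated, target):
    """Combined generator loss: adversarial (context) + weighted L1 (pixel).

    disc_logits_fake: discriminator logits for the generated images
    generated, target: U-Net generator output and ground-truth photo
    """
    # Context term: push the discriminator to classify fakes as real.
    adv = adv_loss(disc_logits_fake, torch.ones_like(disc_logits_fake))
    # Pixel term: keep the output close to the target photo.
    pix = l1_loss(generated, target)
    return adv + LAMBDA_PIXEL * pix
```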