Abstract
Face sketch synthesis aims to generate a face sketch image from a corresponding photo and has wide applications in law enforcement and digital entertainment. Despite remarkable progress in face sketch synthesis, most existing works focus mainly on transferring facial content at the expense of facial detail. In this paper, we present a new generative adversarial learning framework that focuses on detail preservation for realistic face sketch synthesis. Specifically, a high-resolution network is modified to serve as the generator that transforms a face image from the photograph domain to the sketch domain. Beyond the common adversarial loss, we design a detail loss that forces the synthesized face sketches to preserve details close to those of the corresponding photos. In addition, a style loss is adopted to constrain the synthesized face sketches to exhibit a vivid sketch style, like that of hand-drawn sketches. Experimental results demonstrate that the proposed approach achieves superior performance compared to state-of-the-art approaches in both visual perception and objective evaluation. In particular, it attains higher FSIM values (0.7345 and 0.7080) and Scoot values (0.5317 and 0.5091) than most comparison methods on the CUFS and CUFSF datasets, respectively.
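The abstract describes a generator trained with three loss terms: an adversarial loss, a detail loss, and a style loss. The exact formulations are not given here, so the following is a minimal, hypothetical NumPy sketch of how such a combined objective might be assembled: the detail loss is illustrated as an L1 distance between image gradients (a common proxy for fine detail), the style loss as a Gram-matrix distance (Gatys-style), and the weights `lam_d` and `lam_s` are illustrative assumptions, not values from the paper.

```python
import numpy as np

def gram_matrix(feat):
    # feat: (C, H*W) flattened feature maps; the Gram matrix
    # captures channel-correlation statistics often used for style.
    return feat @ feat.T / feat.shape[1]

def detail_loss(sketch, photo):
    # L1 distance between image gradients, one plausible way to
    # compare fine detail; the paper's exact detail loss may differ.
    gx_s, gy_s = np.gradient(sketch)
    gx_p, gy_p = np.gradient(photo)
    return np.abs(gx_s - gx_p).mean() + np.abs(gy_s - gy_p).mean()

def style_loss(feat_sketch, feat_ref):
    # mean squared distance between Gram matrices of the synthesized
    # sketch's features and a hand-drawn reference's features.
    diff = gram_matrix(feat_sketch) - gram_matrix(feat_ref)
    return (diff ** 2).mean()

def total_loss(adv, det, sty, lam_d=10.0, lam_s=1.0):
    # Weighted sum of the three terms; lam_d and lam_s are
    # illustrative hyperparameters, not taken from the paper.
    return adv + lam_d * det + lam_s * sty
```

In a real training loop these terms would be computed on generator outputs and backpropagated jointly; the weighting between terms controls the trade-off between content fidelity, detail preservation, and sketch style.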