Abstract

Face sketch synthesis aims to generate a face sketch image from a corresponding photo and has wide applications in law enforcement and digital entertainment. Despite the remarkable progress made in face sketch synthesis, most existing works focus mainly on transferring facial content, at the expense of facial detail information. In this paper, we present a new generative adversarial learning framework that focuses on detail preservation for realistic face sketch synthesis. Specifically, a high-resolution network is modified to serve as the generator, transforming a face image from the photograph domain to the sketch domain. In addition to the common adversarial loss, we design a detail loss that forces the synthesized face sketches to have details close to those of the corresponding photos. Furthermore, a style loss is adopted to constrain the synthesized face sketches to exhibit a vivid sketch style, as in hand-drawn sketches. Experimental results demonstrate that the proposed approach achieves superior performance compared to state-of-the-art approaches, in terms of both visual perception and objective evaluation. Specifically, this study reports higher FSIM values (0.7345 and 0.7080) and Scoot values (0.5317 and 0.5091) than most comparison methods on the CUFS and CUFSF datasets, respectively.
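The abstract describes a generator objective combining an adversarial loss, a detail loss, and a style loss. As a minimal illustrative sketch only (the paper's exact formulations and weights are not given in the abstract), the style loss is assumed here to be a standard Gram-matrix loss and the detail loss a simple L1 distance; the names and weights `w_det`, `w_sty` are hypothetical placeholders:

```python
import numpy as np

def gram_matrix(feat):
    # feat: (C, H, W) feature map; the Gram matrix summarizes style statistics
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(feat_syn, feat_ref):
    # Mean squared difference between Gram matrices of synthesized and
    # reference (hand-drawn) sketch features
    return float(np.mean((gram_matrix(feat_syn) - gram_matrix(feat_ref)) ** 2))

def detail_loss(detail_syn, detail_ref):
    # L1 distance between detail maps (e.g. high-frequency components);
    # an assumption, not necessarily the paper's exact detail loss
    return float(np.mean(np.abs(detail_syn - detail_ref)))

def total_generator_loss(adv, det, sty, w_det=1.0, w_sty=1.0):
    # Weighted sum of the three terms; w_det and w_sty are placeholder weights
    return adv + w_det * det + w_sty * sty
```

In practice these losses would be computed on network feature maps and backpropagated through the generator; the sketch above only illustrates how the three terms combine into one objective.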
