Abstract

This paper considers the problem of face sketch synthesis in the wild, i.e., transforming a face photo into a face sketch. Face sketch synthesis is widely applied in law enforcement as well as in digital entertainment. However, existing methods either rely on hand-crafted techniques that depend on prior human experience or adopt deep learning in an end-to-end framework in which facial details cannot be well represented. In this paper, we propose a novel approach for face sketch synthesis in the wild via a deep patch representation-based probabilistic graphical model (DeepPGM). A Siamese network is constructed to extract a deep patch representation from a raw facial patch, exploiting the representative detail information needed for robust face sketch synthesis. The resulting deep patch representations and facial image patches are then optimally combined through a probabilistic graphical model. The proposed DeepPGM approach not only outperforms the state of the art on public face sketch datasets but also copes with forensic photos under in-the-wild conditions, including varying lighting, poses, occlusions, skin colors, and ethnic origins. The superiority of the proposed method is demonstrated by extensive experiments on two public face sketch datasets and real-world forensic photos in the wild.
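As a rough illustration of the patch-representation idea, the sketch below shows a minimal Siamese patch encoder: two weight-shared branches that embed a photo patch and a sketch patch so that their distance can serve as a matching cost inside a patch-based graphical model. This is only an illustrative sketch, not the authors' implementation; the 32x32 grayscale patch size, layer widths, and 128-dimensional embedding are assumptions made for the example.

```python
# Minimal sketch (not the paper's implementation) of a Siamese patch encoder.
# Assumptions: 32x32 grayscale patches, small conv stack, 128-d embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchEncoder(nn.Module):
    """Maps a grayscale 32x32 patch to a fixed-length representation."""
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(64 * 8 * 8, dim)

    def forward(self, x):
        # Flatten conv features and L2-normalize the embedding.
        return F.normalize(self.fc(self.features(x).flatten(1)), dim=1)

class SiamesePatchNet(nn.Module):
    """Weight-shared branches for photo patches and candidate sketch patches."""
    def __init__(self, dim=128):
        super().__init__()
        self.encoder = PatchEncoder(dim)

    def forward(self, photo_patch, sketch_patch):
        return self.encoder(photo_patch), self.encoder(sketch_patch)

# Usage: squared Euclidean distance between embeddings could act as the data
# term when selecting candidate sketch patches in a graphical model.
net = SiamesePatchNet()
photo = torch.randn(4, 1, 32, 32)    # photo patches
sketch = torch.randn(4, 1, 32, 32)   # candidate sketch patches
zp, zs = net(photo, sketch)
dist = (zp - zs).pow(2).sum(dim=1)
```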
