Abstract

Face sketch synthesis plays a crucial role in face recognition for law enforcement applications. However, current face sketch synthesis approaches generate sketches from photos using a model trained on a particular database, usually collected from individuals of the same ethnicity, so the synthesized sketches merely inherit the distinct facial distributions (shape and texture) of that database. This also makes such models ill-suited to real-world applications, which involve multiple photo variations such as pose, lighting, skin color, and ethnic origin. In this paper, a unified face sketch synthesis model that accounts for ethnicity as well as photo variations is proposed. A new deep learning scheme is designed to handle the generic visual representation and global structure of the face. Toward this objective, the recent success of deep residual blocks is exploited by incorporating them into a plain feedforward network, termed DResNet, to learn a regression model for face sketch synthesis. A heterogeneous database containing photos with lighting, ethnicity, hair, and skin variations is used to train the DResNet model. Extensive subjective and objective evaluations show the superiority of the proposed DResNet method over state-of-the-art face sketch synthesis methods. Experimental results also demonstrate that the proposed DResNet method generalizes to face sketch synthesis for real-world applications.
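
The abstract describes incorporating deep residual blocks into a plain feedforward network to learn a photo-to-sketch regression model. The abstract does not give layer counts, channel widths, or the training loss, so the following PyTorch snippet is only a minimal illustrative sketch of such an architecture; the block count, channel width, and the L2 loss shown here are assumptions, not the authors' DResNet configuration.

```python
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """A standard residual block: two 3x3 convolutions with an identity skip connection."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.body(x))  # add the shortcut, then activate


class SketchRegressor(nn.Module):
    """Plain feedforward network with residual blocks that regresses an RGB photo
    to a single-channel sketch of the same spatial size (hypothetical hyperparameters)."""

    def __init__(self, channels: int = 64, num_blocks: int = 8):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.blocks = nn.Sequential(*[ResidualBlock(channels) for _ in range(num_blocks)])
        self.tail = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, photo):
        return self.tail(self.blocks(self.head(photo)))


if __name__ == "__main__":
    # Pixel-wise regression: compare the predicted sketch against the ground-truth sketch.
    model = SketchRegressor()
    photo = torch.randn(1, 3, 200, 156)          # dummy photo batch
    target = torch.randn(1, 1, 200, 156)         # dummy ground-truth sketch
    pred = model(photo)
    loss = nn.functional.mse_loss(pred, target)  # L2 regression loss (an assumption)
    print(pred.shape, loss.item())
```

In this kind of setup, training on a heterogeneous photo database (varied lighting, ethnicity, hair, and skin) would simply mean feeding such pairs through the same regression objective; no architectural change is required.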
