Abstract: Sketch-based face recognition is an interesting task in vision and multimedia research, yet it is quite challenging due to the great difference between face photos and sketches. In this paper, we propose a novel approach to photo-sketch generation, aiming to automatically transform face photos into detail-preserving personal sketches. Unlike traditional models that synthesize sketches from a dictionary of exemplars, we develop a fully convolutional network to learn the end-to-end photo-sketch mapping. Our approach takes whole face photos as inputs and directly generates the corresponding sketch images with efficient inference and learning; the architecture is stacked from convolutional kernels of very small size only.

Abstract: The exemplar-based method is most frequently used in face sketch synthesis because of its efficiency in representing the nonlinear mapping between face photos and sketches. However, the sketches synthesized by existing exemplar-based methods suffer from block artifacts and blur effects. In addition, most exemplar-based methods ignore the training sketches in the weight-representation process. To improve synthesis performance, a novel joint training model that takes sketches into consideration is proposed in this paper. First, joint training photos and sketches are constructed by concatenating each original photo and its sketch with a high-pass-filtered image of the corresponding sketch. Then, an offline random sampling strategy is adopted for each test photo patch to select joint training photo and sketch patches from the neighboring region. Finally, a novel locality constraint is designed to compute the reconstruction weights, allowing the synthesized sketches to retain more detailed information.
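As a rough illustration of the end-to-end model in the first abstract, the following is a minimal sketch of a fully convolutional photo-to-sketch network, assuming PyTorch; the layer count, channel width, and 3x3 kernel size are illustrative assumptions rather than the paper's exact architecture.

```python
# Minimal sketch of an end-to-end photo-to-sketch fully convolutional network.
# ASSUMPTIONS: PyTorch, grayscale input/output, 3x3 kernels; the layer count
# and channel widths below are illustrative, not the paper's exact design.
import torch
import torch.nn as nn

class PhotoSketchFCN(nn.Module):
    def __init__(self, channels=64, num_layers=5):
        super().__init__()
        layers = [nn.Conv2d(1, channels, kernel_size=3, padding=1),
                  nn.ReLU(inplace=True)]
        for _ in range(num_layers - 2):
            layers += [nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True)]
        # Final 3x3 convolution maps the features back to one sketch channel.
        layers.append(nn.Conv2d(channels, 1, kernel_size=3, padding=1))
        self.net = nn.Sequential(*layers)

    def forward(self, photo):
        # Fully convolutional: no fully connected layers, so whole face
        # photos of any size map to sketches of the same spatial resolution.
        return self.net(photo)

# Usage: a 1x1x200x156 photo tensor yields a 1x1x200x156 sketch tensor.
model = PhotoSketchFCN()
sketch = model(torch.randn(1, 1, 200, 156))
```

Because the network contains only small-kernel convolutions, inference is a single forward pass over the whole photo, with no patch dictionary lookup at test time.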
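For the joint training model in the second abstract, the sketch below illustrates one plausible way to build the joint photo and sketch representations; the Gaussian-subtraction high-pass filter and the concatenation axis are assumptions, as the abstract does not specify either.

```python
# Sketch of constructing a "joint" training pair: the original photo and
# sketch are each concatenated with a high-pass filtered image of the sketch,
# so that reconstruction weights also respect fine sketch detail.
# ASSUMPTIONS: a Gaussian-subtraction high-pass filter and column-wise
# concatenation; the paper's exact filter and layout are not given here.
import numpy as np
from scipy.ndimage import gaussian_filter

def joint_pair(photo, sketch, sigma=2.0):
    """photo, sketch: 2-D float arrays of equal shape."""
    high_pass = sketch - gaussian_filter(sketch, sigma)  # keep fine strokes
    joint_photo = np.concatenate([photo, high_pass], axis=-1)
    joint_sketch = np.concatenate([sketch, high_pass], axis=-1)
    return joint_photo, joint_sketch
```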
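Finally, the locality-constrained reconstruction weights could take a form similar to locality-constrained linear coding; the sketch below assumes the formulation min_w ||x - Dw||^2 + lambda * ||diag(d)w||^2 subject to sum(w) = 1, where d_i is the distance from the test patch to the i-th sampled patch, which may differ from the paper's exact constraint.

```python
# Sketch of locality-constrained reconstruction weights for one test patch.
# ASSUMPTION: an LLC-style constraint
#   min_w ||x - D w||^2 + lam * ||diag(dist) w||^2   s.t.  sum(w) = 1,
# with dist_i = ||x - D[:, i]||; the paper's exact formulation may differ.
import numpy as np

def locality_constrained_weights(x, D, lam=0.1):
    """x: (p,) test photo patch; D: (p, K) sampled joint training patches."""
    dist = np.linalg.norm(D - x[:, None], axis=0)   # locality adaptor
    Z = D - x[:, None]                              # dictionary shifted by x
    C = Z.T @ Z + lam * np.diag(dist ** 2)          # locality-regularized Gram
    w = np.linalg.solve(C, np.ones(len(dist)))      # solve C w = 1
    return w / w.sum()                              # enforce sum(w) = 1

# Usage: weights computed against joint photo patches are transferred to the
# corresponding training sketch patches to synthesize the output patch.
rng = np.random.default_rng(0)
x = rng.standard_normal(25)          # hypothetical 5x5 test patch, flattened
D = rng.standard_normal((25, 10))    # 10 randomly sampled candidate patches
w = locality_constrained_weights(x, D)
```

The distance-weighted penalty drives the weights of far-away patches toward zero, which is one way the locality constraint can suppress the block artifacts and blur that uniform least-squares weights tend to produce.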