Abstract

Sketch-based image retrieval (SBIR) has advanced through deep learning, which learns cross-modal distance metrics relating sketches and photos from large numbers of sketch-photo pairs. However, existing datasets of sketch-photo pairs are small, as acquiring such pairs at scale is expensive. To alleviate this issue, data augmentation via image transformations such as scaling, flipping, rotation, and deformation has been widely adopted. Still, insufficient training data appears to have prevented deep learning from reaching its full potential for SBIR. In this paper, we propose a novel data augmentation approach dedicated to SBIR. A deep neural network called Photo2Sketch (P2S) converts photos into line drawings that are visually similar to those sketched by humans. An artificially augmented training dataset of sketch-photo pairs is generated at low cost by feeding photos from a large image corpus into P2S. Experiments evaluate the quality of the sketch-like images generated by P2S as well as the efficacy of the proposed data augmentation algorithm in an SBIR scenario. In particular, retrieval accuracy improves significantly when the proposed algorithm is combined with data augmentation by image transformation.
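The augmentation pipeline described above can be sketched as follows. Note that the actual P2S is a learned deep network whose architecture is not given here; the `photo2sketch` function below is a hypothetical stand-in (a simple gradient-based edge map) used only to illustrate how synthetic sketch-photo pairs would be combined with standard image transformations.

```python
import numpy as np

def photo2sketch(photo):
    # Hypothetical stand-in for the paper's P2S network: a crude
    # gradient-magnitude "line drawing" replaces the learned model.
    gy = np.abs(np.diff(photo, axis=0, prepend=photo[:1]))
    gx = np.abs(np.diff(photo, axis=1, prepend=photo[:, :1]))
    edges = gx + gy
    return (edges > edges.mean()).astype(np.float32)  # binary line map

def augment_pairs(photos, transforms=()):
    """Build synthetic (sketch, photo) training pairs.

    Each photo is expanded with the given image transformations
    (e.g. flipping), and every variant is passed through
    photo2sketch, yielding sketch-photo pairs at no annotation cost.
    """
    pairs = []
    for photo in photos:
        variants = [photo] + [t(photo) for t in transforms]
        for v in variants:
            pairs.append((photo2sketch(v), v))
    return pairs

# Example: 4 random grayscale "photos", augmented with a horizontal flip.
rng = np.random.default_rng(0)
photos = [rng.random((32, 32)).astype(np.float32) for _ in range(4)]
pairs = augment_pairs(photos, transforms=[np.fliplr])
print(len(pairs))  # 4 photos x (original + flip) = 8 pairs
```

Combining the P2S conversion with conventional transformations in this way mirrors the paper's finding that the two augmentation strategies are complementary.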
