Abstract

Sketching has become popular with the increasing availability of touch screens on portable devices, and sketches now serve tasks such as rendering the visual world, automatic sketch style recognition and abstraction, sketch-based image retrieval (SBIR), and sketch-based perceptual grouping. However, how to automatically generate a sketch from a real image remains an open question. We propose a convolutional neural network-based model, named SG-Net, to generate sketches from natural images. SG-Net is trained to learn the relationship between images and sketches and thus exploits edge information to produce a rough sketch. Mathematical morphology is then applied as a post-processing step to eliminate redundant artifacts in the generated sketches. In addition, to increase the diversity of the generated sketches, we introduce thin plate splines to synthesize sketches in different styles. We evaluate the proposed sketch generation method both quantitatively and qualitatively on a challenging dataset, where it outperforms established methods. Moreover, we conduct extensive experiments on the SBIR task; the results on the Flickr15k dataset demonstrate that our method improves retrieval performance compared with state-of-the-art methods.
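To make the two post-generation steps mentioned above concrete, the following is a minimal Python sketch (not the authors' code) of how a rough binary sketch could be cleaned with morphological opening and then warped with a thin plate spline to obtain a style variant. The function names (clean_sketch, tps_warp), the grid of control points, the jitter magnitude, and the structuring-element size are illustrative assumptions, not values from the paper.

```python
# Hypothetical post-processing sketch: morphological cleanup + TPS warp.
# Assumes a binary sketch array where stroke pixels are 1.
import numpy as np
from scipy import ndimage
from scipy.interpolate import RBFInterpolator


def clean_sketch(sketch, open_size=2):
    """Remove small redundant artifacts via binary morphological opening."""
    structure = np.ones((open_size, open_size), dtype=bool)
    opened = ndimage.binary_opening(sketch.astype(bool), structure=structure)
    return opened.astype(np.uint8)


def tps_warp(sketch, jitter=4.0, grid=4, seed=0):
    """Produce a style-varied copy of a sketch with a thin-plate-spline warp.

    A coarse grid of control points is randomly jittered; RBFInterpolator with
    the 'thin_plate_spline' kernel interpolates a dense backward mapping from
    output coordinates to input coordinates.
    """
    rng = np.random.default_rng(seed)
    h, w = sketch.shape
    # Source control points on a coarse grid, plus randomly jittered targets.
    ys = np.linspace(0, h - 1, grid)
    xs = np.linspace(0, w - 1, grid)
    src = np.stack(np.meshgrid(ys, xs, indexing="ij"), axis=-1).reshape(-1, 2)
    dst = src + rng.normal(scale=jitter, size=src.shape)
    # Fit a TPS that maps warped (output) positions back to original positions.
    tps = RBFInterpolator(dst, src, kernel="thin_plate_spline")
    grid_y, grid_x = np.mgrid[0:h, 0:w]
    coords = np.stack([grid_y.ravel(), grid_x.ravel()], axis=-1)
    mapped = tps(coords).T.reshape(2, h, w)
    # Sample the original sketch at the mapped coordinates.
    warped = ndimage.map_coordinates(sketch.astype(float), mapped, order=1)
    return (warped > 0.5).astype(np.uint8)


if __name__ == "__main__":
    rough = (np.random.rand(128, 128) > 0.95).astype(np.uint8)  # stand-in sketch
    cleaned = clean_sketch(rough)
    styled = tps_warp(cleaned)
```

Larger jitter values or denser control-point grids would yield stronger style variation, at the cost of distorting stroke geometry; the paper's actual parameter choices are not specified in the abstract.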
