Abstract

With the advent of touch screens on mobile devices, sketch-based image search is becoming the most intuitive way to query multimedia content. Traditionally, sketch-based queries were formulated as hand-drawn shapes without any shading or color. The absence of this critical information increases the ambiguity between natural images and their sketches. Although adding color to hand-drawn sketches was previously considered too cumbersome for users of image retrieval systems, modern touch input devices make it convenient to add shades or colors to query sketches. In this work, we propose deep neural codes extracted from partially colored sketches by an efficient convolutional neural network (CNN) fine-tuned on a sketch-oriented augmented dataset. The training dataset is constructed from hand-drawn sketches, natural color images, de-colorized and de-texturized images, coarse and fine edge maps, and flipped and rotated images. Fine-tuning the CNN on this augmented dataset enables it to capture features that effectively represent partially colored sketches. We also study the effects of shading and partial coloring on retrieval performance and show that the proposed method outperforms other state-of-the-art methods in sketch-based large-scale image retrieval on mobile devices.
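The abstract does not give implementation details of the augmentation pipeline; the following is a minimal sketch of the kinds of transforms it names (de-colorization, de-texturization, coarse/fine edge maps, flips, rotations), assuming OpenCV. The function name, filter parameters, Canny thresholds, and rotation angles are illustrative assumptions, not the paper's actual settings.

```python
import cv2

def augment_for_sketch_retrieval(image_bgr):
    """Generate sketch-oriented variants of a natural image
    (illustrative parameters, not the paper's actual settings)."""
    variants = []

    # De-colorize: drop color, keep luminance.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    variants.append(gray)

    # De-texturize: edge-preserving smoothing removes fine texture.
    variants.append(cv2.bilateralFilter(image_bgr, 9, 75, 75))

    # Coarse and fine edge maps via Canny at different thresholds:
    # higher thresholds keep only strong edges (coarse),
    # lower thresholds retain more edge detail (fine).
    variants.append(cv2.Canny(gray, 100, 200))  # coarse
    variants.append(cv2.Canny(gray, 30, 100))   # fine

    # Horizontal flip.
    variants.append(cv2.flip(image_bgr, 1))

    # Small rotations around the image center.
    h, w = image_bgr.shape[:2]
    for angle in (-15, 15):
        m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        variants.append(cv2.warpAffine(image_bgr, m, (w, h)))

    return variants
```

Each natural image would then contribute several sketch-like training samples alongside the original, which is what lets the fine-tuned CNN map hand-drawn, partially colored queries and natural photos into a shared feature space.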
