Abstract

Sketch-based image retrieval (SBIR) aims to return a collection of corresponding images given an input sketch. Unlike traditional content-based image retrieval, SBIR faces unique difficulties due to the large domain gap between sketches and natural images. Motivated by the similarity between edgemaps and sketches, a novel SBIR model named spatial attentive edgemap fusion is presented, which combines both image and edgemap features. Images and their corresponding edgemaps are first encoded into their own latent feature spaces, and the two representations are then fused by a learned spatial attention map. Experimental results on two widely used SBIR datasets, Sketchy and Flickr15K, show the promising performance of the proposed model.
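The fusion step described above can be sketched in code. The following is a minimal NumPy illustration, not the authors' implementation: it assumes the two encoders produce feature maps of equal shape, and that the spatial attention map is a per-location sigmoid score from a 1x1 projection over the concatenated features (the function and parameter names here are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention_fuse(f_img, f_edge, w, b=0.0):
    """Fuse image and edgemap feature maps via a learned spatial attention map.

    f_img, f_edge: (C, H, W) feature maps from the image and edgemap encoders.
    w: (2C,) weights of a 1x1 projection scoring each spatial location
       (a stand-in for the learned attention parameters).
    Returns a (C, H, W) fused feature map: a per-pixel convex combination.
    """
    stacked = np.concatenate([f_img, f_edge], axis=0)        # (2C, H, W)
    scores = np.tensordot(w, stacked, axes=([0], [0])) + b   # (H, W)
    attn = sigmoid(scores)                                   # attention in (0, 1)
    return attn[None] * f_img + (1.0 - attn)[None] * f_edge

# Toy example with random features in place of real encoder outputs.
C, H, W = 4, 8, 8
f_img = rng.standard_normal((C, H, W))
f_edge = rng.standard_normal((C, H, W))
w = rng.standard_normal(2 * C)
fused = spatial_attention_fuse(f_img, f_edge, w)
print(fused.shape)  # (4, 8, 8)
```

Because the attention weight lies in (0, 1), each fused value stays between the corresponding image and edgemap feature values, letting the model lean on edgemap features where they align better with the sketch domain.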
