Abstract

Sketch-based image retrieval (SBIR) has become a prominent research topic in recent years due to the proliferation of touch screens. The problem is nonetheless very challenging, because photos and sketches are inherently modeled in different modalities: photos are accurate (colored and textured) depictions of the real world, whereas sketches are highly abstract (black-and-white) renderings often drawn from human memory. This naturally motivates us to study the effectiveness of various cross-modal retrieval methods in SBIR. However, to the best of our knowledge, all established cross-modal algorithms are designed to traverse the more conventional cross-modal gap between image and text, making their general applicability to SBIR unclear. In this paper, we design a series of experiments to clearly illustrate the circumstances under which cross-modal methods can best be utilized to solve the SBIR problem. More specifically, we choose six state-of-the-art cross-modal subspace learning approaches that have been shown to work well on image-text retrieval, and we conduct extensive experiments on a recently released SBIR dataset. Finally, we present a detailed comparative analysis of the experimental results and offer insights to benefit future research.
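
For intuition, cross-modal subspace learning methods of the kind evaluated here typically map both modalities into a common space where retrieval reduces to nearest-neighbor search. The sketch below illustrates the idea with canonical correlation analysis (CCA), a classical baseline in this family; the synthetic features, dimensions, and choice of CCA are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch of cross-modal subspace learning for SBIR via CCA.
# Feature matrices are synthetic placeholders; in practice they would be
# descriptors (e.g., deep features) extracted from photos and sketches.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)

n_pairs, d_photo, d_sketch = 200, 128, 64           # hypothetical feature sizes
photo_feats = rng.normal(size=(n_pairs, d_photo))   # one row per gallery photo
sketch_feats = rng.normal(size=(n_pairs, d_sketch)) # paired sketch features

# Learn a shared subspace maximizing correlation between the two modalities.
cca = CCA(n_components=32)
cca.fit(photo_feats, sketch_feats)
photo_proj, sketch_proj = cca.transform(photo_feats, sketch_feats)

# Retrieval: rank photos by cosine similarity to a projected sketch query.
query = sketch_proj[0:1]
ranking = np.argsort(-cosine_similarity(query, photo_proj)[0])
print("Top-5 retrieved photo indices:", ranking[:5])
```

The other subspace methods compared in the paper differ mainly in how the projections are learned (e.g., with supervision or nonlinearity), but retrieval follows the same project-then-match pattern.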


