Abstract

In this paper, we propose a multimodal search engine that combines visual and textual cues to retrieve items from a multimedia database that are aesthetically similar to the query. The goal of our engine is to enable intuitive retrieval of merchandise such as clothes or furniture. Existing search engines treat textual input only as an additional source of information about the query image, which does not correspond to the real-life scenario in which the user looks for “the same shirt but made of denim”. Our novel method, dubbed DeepStyle, mitigates these shortcomings by using a joint neural network architecture to model contextual dependencies between features of different modalities. We demonstrate the robustness of this approach on two challenging datasets of fashion items and furniture, on which our DeepStyle engine outperforms baseline methods by more than 20%. Our search engine is commercially deployed and available through a Web-based application.

Highlights

  • A multimodal search engine retrieves a set of items from a multimedia database according to their similarity to the query in more than one feature space, e.g., textual and visual, or audiovisual

  • The DeepStyle approach outperforms all baselines for almost all text queries, achieving the highest average similarity score

  • Network complexity is not directly correlated with the ability to learn style similarity, as illustrated by the weaker similarity results of the Visual-Semantic Embedding (VSE) baseline, which extracts ResNet-50 features instead of VGG-19 features (see the backbone sketch after this list)
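
As a hedged illustration of the backbone swap mentioned above (the weights, preprocessing, and truncation points are our assumptions, not the paper's exact setup), fixed image features could be extracted with torchvision roughly as follows:

```python
# Hypothetical backbone swap for the retrieval baselines (weights,
# preprocessing, and truncation points are assumptions, not the
# paper's exact setup).
import torch
import torch.nn as nn
from torchvision import models

def build_extractor(name: str) -> nn.Module:
    if name == "vgg19":
        m = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
        # Drop the final classification layer; keep the 4096-d
        # penultimate activation as the image descriptor.
        m.classifier = nn.Sequential(*list(m.classifier.children())[:-1])
    elif name == "resnet50":
        m = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        m.fc = nn.Identity()  # expose the 2048-d pooled features
    else:
        raise ValueError(f"unknown backbone: {name}")
    return m.eval()

extractor = build_extractor("resnet50")
with torch.no_grad():
    feats = extractor(torch.randn(1, 3, 224, 224))  # shape: (1, 2048)
```

With both extractors frozen, the baselines differ only in the dimensionality and character of the features (4096-d VGG-19 activations vs. 2048-d pooled ResNet-50 activations) fed to the downstream retrieval model.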

Summary

INTRODUCTION

A multimodal search engine retrieves a set of items from a multimedia database according to their similarity to the query in more than one feature space, e.g., textual and visual, or audiovisual (see Fig. 1). The problem can be decomposed into smaller subproblems by handling each modality with a separate solution, but such pipelines fail to capture dependencies between the modalities. To address the above-mentioned shortcomings of currently available search engines, we propose a novel end-to-end method that uses a neural network architecture to model the joint multimodal space of database objects.
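
The architecture itself is described in the DEEPSTYLE section below; as a minimal, hypothetical sketch of the underlying idea of a jointly learned multimodal embedding (the layer sizes, fusion scheme, and names here are our assumptions, not the authors' exact network), the query image and its textual modifier can be projected into one shared space and matched against precomputed database embeddings:

```python
# Minimal sketch of a joint visual-textual embedding; layer sizes and
# the concatenation-based fusion are illustrative assumptions, not the
# authors' exact DeepStyle architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointEmbedding(nn.Module):
    def __init__(self, visual_dim=4096, text_dim=300, joint_dim=256):
        super().__init__()
        self.visual_proj = nn.Linear(visual_dim, joint_dim)
        self.text_proj = nn.Linear(text_dim, joint_dim)
        self.fuse = nn.Linear(2 * joint_dim, joint_dim)

    def forward(self, visual_feat, text_feat):
        v = F.relu(self.visual_proj(visual_feat))
        t = F.relu(self.text_proj(text_feat))
        joint = self.fuse(torch.cat([v, t], dim=-1))
        return F.normalize(joint, dim=-1)  # unit norm -> cosine retrieval

def rank_items(model, query_visual, query_text, db_embeddings):
    """Rank precomputed database embeddings against a multimodal query."""
    with torch.no_grad():
        q = model(query_visual, query_text)    # (1, joint_dim)
        scores = db_embeddings @ q.squeeze(0)  # (N,) cosine scores
    return torch.argsort(scores, descending=True)
```

Normalizing embeddings to unit length makes cosine similarity a plain dot product, so all database items can be embedded once offline and ranked cheaply at query time.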

RELATED WORK
DEEPSTYLE
EVALUATION METRICS
QUANTITATIVE RESULTS
CONCLUSIONS