Abstract

Increasingly, people define and express themselves in online social networks, such as Facebook and Instagram, by uploading photos showing the clothes they wear. As a result, such online social networks are becoming major sources of inspiration, with users looking for others with a similar clothing style. In this paper, we propose a novel learning to rank (L2R) algorithm for finding similar apparel style given a query image. L2R algorithms use a labeled training set to generate a ranking model that can later be used to rank new query results. These training sets, however, are costly and laborious to produce, requiring human annotators to assess the relevance of candidate images in relation to a query. Active learning algorithms are able to reduce the labeling effort by selectively sampling an unlabeled set of images and choosing the subset that maximizes the learning function's effectiveness. Specifically, our proposed L2R algorithm employs an association rule active sampling algorithm to select very small but effective training sets. Further, our algorithm operates on visual (e.g., image descriptors) and textual (e.g., comments associated with the image) elements in a way that makes it able (i) to expand the query image (for which only visual elements are available) with textual elements, and (ii) to combine multiple elements, whether visual or textual, using basic economic efficiency concepts. We conducted a systematic evaluation of the proposed algorithm using everyday photos collected from Instagram, and we show that our L2R algorithm reduces the need for labeled images by two orders of magnitude, while still improving upon state-of-the-art models by 4-8% in terms of mean average precision.

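To make the pool-based setting described in the abstract concrete, the sketch below shows a generic active learning loop for a relevance ranker: a small seed of labeled query-candidate pairs is grown by repeatedly asking an annotator to label the pairs the current model is least certain about, and the resulting model is then used to rank candidates. This uses plain uncertainty sampling and synthetic features purely for illustration; it is not the paper's association rule sampler, and all data, dimensions, and names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical pool: each row is a (query, candidate) feature vector combining
# visual and textual descriptors; labels are 1 (relevant) / 0 (not relevant).
X_pool = rng.normal(size=(2000, 32))
true_w = rng.normal(size=32)
y_pool = (X_pool @ true_w + rng.normal(scale=0.5, size=2000) > 0).astype(int)

# Tiny seed set with both classes present, mimicking an initial annotation effort.
pos = np.flatnonzero(y_pool == 1)
neg = np.flatnonzero(y_pool == 0)
labeled = list(rng.choice(pos, 5, replace=False)) + list(rng.choice(neg, 5, replace=False))
unlabeled = [i for i in range(len(X_pool)) if i not in set(labeled)]

model = LogisticRegression(max_iter=1000)

for _ in range(10):  # each round requests 10 new labels from the annotator
    model.fit(X_pool[labeled], y_pool[labeled])
    proba = model.predict_proba(X_pool[unlabeled])[:, 1]
    uncertainty = -np.abs(proba - 0.5)            # closest to 0.5 = most uncertain
    picks = np.argsort(uncertainty)[-10:]         # select the most uncertain pairs
    chosen = [unlabeled[i] for i in picks]
    labeled.extend(chosen)                        # "annotator" reveals y_pool[chosen]
    unlabeled = [i for i in unlabeled if i not in set(chosen)]

# Rank a set of candidates for a new query by predicted relevance probability.
scores = model.predict_proba(X_pool[:100])[:, 1]
ranking = np.argsort(-scores)
```

After ten rounds the ranker has seen roughly 110 labels out of a pool of 2,000 pairs, which is the kind of labeling reduction the active sampling strategy in the paper aims for, albeit with a different (association rule based) selection criterion.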