Abstract

Fashion compatibility prediction has attracted considerable attention recently. Mining the compatibility between fashion items in an outfit differs from learning visual similarity, since the compatibility relationship is more delicate. A popular way to treat the problem is to decompose outfit compatibility into pairwise item matching. However, most existing methods match items without considering the context, i.e., the remaining items in the outfit. Recent efforts learn the underlying high-order relationships among items by treating the outfit as a whole, but these models can be sensitive to the properties of different datasets, and their item representations are less compact than those of pairwise models. In this paper, we propose a context conditioning embedding approach that learns compact representations preserving the information shared among items in the presence of contextual items. We embed items into two spaces, a general space and a contextual space, where the representation in the contextual space incorporates information from the context. We employ mutual information maximization for model learning, which we show to be more appropriate for the problem. Extensive experiments show that our model outperforms other state-of-the-art methods.
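The two-space idea described above can be sketched as follows. This is an illustrative PyTorch sketch, not the paper's exact formulation: the module names, the mean-pooled context, and the InfoNCE estimator (one common lower bound used for mutual information maximization) are all assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextConditionedEmbedding(nn.Module):
    """Hypothetical sketch: embed each item into a general space and,
    conditioned on the remaining outfit items (the context), into a
    contextual space."""

    def __init__(self, feat_dim: int, emb_dim: int):
        super().__init__()
        self.general = nn.Linear(feat_dim, emb_dim)          # general space
        self.contextual = nn.Linear(2 * feat_dim, emb_dim)   # item + context

    def forward(self, items: torch.Tensor):
        # items: (batch, n_items, feat_dim) -- one outfit per row
        g = self.general(items)
        # Context for each item: mean of the *other* items in the outfit.
        total = items.sum(dim=1, keepdim=True)
        context = (total - items) / (items.size(1) - 1)
        c = self.contextual(torch.cat([items, context], dim=-1))
        return F.normalize(g, dim=-1), F.normalize(c, dim=-1)

def info_nce(c: torch.Tensor, g: torch.Tensor, temperature: float = 0.1):
    """InfoNCE lower bound on the mutual information between an item's
    contextual and general embeddings; other items in the batch act as
    negatives."""
    c = c.reshape(-1, c.size(-1))
    g = g.reshape(-1, g.size(-1))
    logits = c @ g.t() / temperature
    labels = torch.arange(c.size(0))
    return F.cross_entropy(logits, labels)

# Usage: two outfits of three items each, with random 8-d item features.
model = ContextConditionedEmbedding(feat_dim=8, emb_dim=4)
g, c = model(torch.randn(2, 3, 8))
loss = info_nce(c, g)
```

Minimizing `loss` pulls each item's contextual embedding toward its own general embedding while pushing it away from those of other items, which is the sense in which the contextual representation is trained to preserve information shared with the item itself under the given context.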
