Abstract

Although real-time dynamic recommender systems have been applied successfully by e-commerce and technology companies for more than a decade, we at IKEA Group have only just started our journey into this exciting field. At IKEA, customer experience is at the heart of everything we do, and a key principle for any machine learning algorithm we design to improve this experience is that it should act as an extension of the home-furnishing expertise that our co-workers have developed and fine-tuned for more than 75 years. In this talk, we discuss a recommendation strategy that projects the inspirational shopping experience of our blue boxes onto our digital touch points by defining a notion of style from our vast collection of inspirational content. To go beyond classical, transaction-based collaborative filtering strategies, we take as our starting point the different types of images taken of each product at launch. Our current implementation relies on three types of images: (1) white-canvas, an image of a product displayed on a plain white background; (2) context-based, which shows a product in the larger context of a room, but where the emphasis remains on the product itself; and (3) inspirational, in which a product is shown in a purposefully atmospheric setting with the focus on the scene as a whole. By extracting the products displayed in our tagged inspirational images, we first construct a graph of products that embeds the mindset of our talented designers. Add-to-cart recommendations are then generated from the resulting graph using user-behaviour data collected from our digital touch points (app, web) and transactional data from purchases made online or in one of our IKEA stores. While implementing the strategy, we came across a few interesting stand-alone problems; notably, we faced a severe lack of properly tagged inspirational images, and much of our current furniture range does not appear in our inspirational collection at all. To address the latter problem, we pursue a supervised learning approach that automatically identifies products that (1) complement each other in terms of function and (2) match in terms of style, taking product metadata attributes and the full collection of product images as input. We also discuss how we combine features extracted from context-based and inspirational images using a pre-trained ImageNet model [2] with manually tagged inspirational images and transaction data from stores to create our training data. The use of both context-based and inspirational images distinguishes our approach from similar methodologies in the fashion industry [1, 3] and enables us to capture the notion of complementary products in a satisfying way.
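To make the graph-based strategy concrete, the following is a minimal sketch of the co-occurrence idea described above, assuming each tagged inspirational image is represented as the list of product IDs it contains. The function names and the example product names are illustrative only and are not taken from IKEA's actual implementation.

# A minimal sketch of the style graph: products become nodes, and an edge
# (a, b) counts how often a and b appear together in the same tagged
# inspirational image. `build_style_graph` and `recommend_for_cart` are
# hypothetical names for illustration.
from collections import defaultdict
from itertools import combinations

def build_style_graph(tagged_images):
    """Build a weighted co-occurrence graph from lists of product IDs,
    one list per tagged inspirational image."""
    graph = defaultdict(lambda: defaultdict(int))
    for products in tagged_images:
        for a, b in combinations(set(products), 2):
            graph[a][b] += 1
            graph[b][a] += 1
    return graph

def recommend_for_cart(graph, cart, k=5):
    """Score candidates by their total edge weight to the items already
    in the cart, excluding the cart items themselves."""
    scores = defaultdict(int)
    for item in cart:
        for neighbour, weight in graph.get(item, {}).items():
            if neighbour not in cart:
                scores[neighbour] += weight
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Example: three tagged images and a one-item cart.
images = [["BILLY", "POANG", "LACK"], ["BILLY", "LACK"], ["POANG", "STRANDMON"]]
graph = build_style_graph(images)
print(recommend_for_cart(graph, {"BILLY"}))  # ['LACK', 'POANG']

In practice, the raw edge weights would of course be combined with the user-behaviour and transactional signals mentioned above before ranking; the sketch only illustrates how co-occurrence in inspirational content induces a product graph.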
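The feature-extraction step can likewise be sketched with any ImageNet-pre-trained backbone. The abstract does not name the architecture, so the ResNet-50 below is an assumption made purely for illustration.

# A hedged sketch of extracting image features with an ImageNet-pre-trained
# network; a torchvision ResNet-50 stands in for whatever backbone the
# authors actually used.
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Drop the classification head so the model outputs a 2048-d embedding.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def embed(image_path: str) -> torch.Tensor:
    """Return an embedding for one context-based or inspirational image."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    return backbone(x).squeeze(0)  # shape: (2048,)

Embeddings of context-based and inspirational images produced this way, together with the manually tagged images and store transaction data mentioned above, would form the training data for the supervised complement-and-match model.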
