Abstract

How would you search for a unique, flamboyant shoe that a friend wore and you want to buy? What if you did not take a picture? Existing approaches propose interactive image search, but they either entrust the user with taking the initiative to provide informative feedback, or give all control to the system, which determines informative questions to ask. Instead, we propose a mixed-initiative framework where both the user and the system can be active participants, depending on whose input will be more beneficial for obtaining high-quality search results. We develop a reinforcement learning approach which dynamically decides which of four interaction opportunities to give to the user: drawing a sketch, marking images as relevant or not, providing free-form attribute feedback, or answering attribute-based questions. By allowing these four options, our system optimizes both the informativeness of feedback and the user's ability to explore the data, enabling faster image retrieval. We outperform five baselines on three datasets across extensive experimental settings.
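To make the decision mechanism concrete, the core idea of dynamically selecting among the four interaction opportunities can be sketched as an epsilon-greedy policy over learned action values. This is a minimal illustrative sketch, not the paper's actual implementation: the action names mirror the four options listed above, but the `q_values` table, the `epsilon` parameter, and the function signature are all assumptions made here for clarity.

```python
import random

# The four interaction opportunities described in the abstract.
ACTIONS = ["sketch", "relevance_feedback", "free_form_attribute", "attribute_question"]

def choose_interaction(q_values, epsilon=0.1, rng=random):
    """Pick the interaction expected to be most informative for the
    current search state, exploring a random option with probability
    epsilon. `q_values` maps each action name to an estimated value
    (hypothetical; a real system would learn these from user sessions)."""
    if rng.random() < epsilon:
        return rng.choice(ACTIONS)          # explore: try a random interaction
    return max(ACTIONS, key=lambda a: q_values[a])  # exploit: best-valued one
```

With `epsilon=0.0` the policy is purely greedy, always offering the interaction with the highest estimated value; raising `epsilon` trades some immediate informativeness for exploration of less-tried interaction types.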
