Abstract

Recent works in computer vision and multimedia have shown that image memorability can be automatically inferred by exploiting powerful deep-learning models. This article advances the state of the art in this area by addressing a novel and more challenging issue: “Given an arbitrary input image, can we make it more memorable?” To tackle this problem, we introduce an approach based on an editing-by-applying-filters paradigm: given an input image, we propose to automatically retrieve a set of “style seeds,” i.e., a set of style images that, when applied to the input image through a neural style transfer algorithm, provide the highest increase in memorability. We show the effectiveness of the proposed approach with experiments on the publicly available LaMem dataset, performing both a quantitative evaluation and a user study. To demonstrate the flexibility of the proposed framework, we also analyze the impact of different implementation choices, such as using different state-of-the-art neural style transfer methods. Finally, we show several qualitative results to provide additional insights into the link between image style and memorability.
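The core idea of the retrieval step can be summarized as ranking candidate style images by the memorability gain they produce after style transfer. The sketch below is only an illustration of that selection logic, not the authors' implementation; `predict_memorability` and `style_transfer` are hypothetical placeholders standing in for any memorability predictor (e.g., one trained on LaMem) and any neural style transfer method.

```python
# Minimal sketch of "style seed" selection, assuming two placeholder functions:
#   predict_memorability(img) -> float   (hypothetical memorability predictor)
#   style_transfer(content, style) -> img (hypothetical neural style transfer call)

def select_style_seeds(input_image, style_images, predict_memorability,
                       style_transfer, k=5):
    """Return the k style images whose transfer yields the largest memorability gain."""
    base_score = predict_memorability(input_image)
    gains = []
    for style in style_images:
        stylized = style_transfer(input_image, style)
        gain = predict_memorability(stylized) - base_score
        gains.append((gain, style))
    # Keep the styles that most increase the predicted memorability.
    gains.sort(key=lambda pair: pair[0], reverse=True)
    return [style for _, style in gains[:k]]
```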
