Abstract

Casual users can nowadays create almost arbitrary image content by providing textual prompts to generative machine-learning models. These models rapidly improve in image quality with each new generation, providing the means to create photos, paintings in different styles, and even videos. One feature of such models is the ability to take an image as input and adjust its content according to a prompt. For static images and videos, visual obfuscation of content can be achieved by slightly changing persons, text, and other objects. This technique holds potential for eye-tracking experiments, where stimuli can be obfuscated post hoc for the dissemination of analysis results and visualizations. In this work, we discuss how the technique could serve to anonymize stimuli (e.g., for double-blind review or to remove product placements) and to protect the privacy of people visible in the stimuli. We further investigate how applying this anonymization process influences visual saliency and the depiction of stimuli in visualization techniques. Our results show that slight image transformations do not drastically change the saliency of a scene but obfuscate objects and faces while preserving important image structures for context.
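
The abstract does not name a specific model or library; as a rough illustration only, the prompt-guided image-to-image editing described above could be sketched with the Hugging Face diffusers library and a Stable Diffusion img2img pipeline. The model ID, file names, prompt, and strength value below are illustrative assumptions; a low strength applies only slight changes so that faces, text, and objects are obfuscated while the overall scene structure is kept.

import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load the original stimulus (hypothetical file name) and resize for the model.
init_image = Image.open("stimulus.png").convert("RGB").resize((768, 512))

# Load an img2img pipeline; the model ID is an illustrative choice, not the paper's.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Prompt describing the desired (anonymized) content; a low strength value
# perturbs the image only slightly instead of replacing it entirely.
prompt = "a street scene with generic storefronts and unrecognizable faces"
anonymized = pipe(
    prompt=prompt,
    image=init_image,
    strength=0.3,
    guidance_scale=7.5,
).images[0]

anonymized.save("stimulus_anonymized.png")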
