Abstract

To resist government and corporate use of facial recognition to surveil users through their personal images, researchers have created privacy-enhancing image filters that use adversarial machine learning. These "subversive AI" (SAI) image filters aim to defend users from facial recognition by distorting personal images in ways that are barely noticeable to humans but confusing to computer vision algorithms. SAI filters are limited, however, by a lack of rigorous user evaluation assessing their acceptability. We addressed this limitation by creating and validating a scale to measure user acceptance: the SAIA-8. In a three-step process, we applied a mixed-methods approach that closely adhered to best practices for scale creation and validation in measurement theory. First, to understand the factors that influence user acceptance of SAI filter outputs, we interviewed 15 participants. Interviewees disliked existing SAI filter outputs because of a perceived lack of usefulness and conflicts with their desired self-presentation. Using insights and statements from the interviews, we generated 106 candidate items for the scale. Through an iterative refinement and validation process with 245 participants from Prolific, we arrived at the SAIA-8: a set of eight items that captures user acceptability of privacy-enhancing perturbations to personal images and can aid in benchmarking and prioritizing user acceptability when developing and evaluating new SAI filters.
