Abstract

As AI-based systems become commonplace in our daily lives, they need to provide understandable information to their users about how they collect, process, and output information that concerns them. Such transparency practices have gained importance due to recent ethical guidelines and regulations, as well as research suggesting a positive relationship between the transparency of AI-based systems and users’ satisfaction. This article provides a new tool for the design and study of transparency in AI-based systems that use personalization. The tool, called Transparency-Check, is based on a checklist of questions about transparency in four areas of a system: input (data collection), processing (algorithmic models), output (personalized recommendations), and user control (user feedback mechanisms to adjust elements of the system). Transparency-Check can be used by researchers, designers, and end users of computer systems. To demonstrate the usefulness of Transparency-Check from a researcher’s perspective, we collected the responses of 108 student participants who used the transparency checklist to rate five popular real-world systems (Amazon, Facebook, Netflix, Spotify, and YouTube). Based on users’ subjective evaluations, the systems showed low compliance with transparency standards, with some nuances across individual categories (specifically data collection, processing, and user control). We use these results to compile design recommendations for improving transparency in AI-based systems, such as integrating information about the system’s behavior during the user’s interactions with it.
