Abstract

Transparency has long been held up as the solution to the societal harms caused by digital platforms' use of algorithms. However, what transparency means, how to create meaningful transparency, and what behaviors can be altered through transparency are all ambiguous legal and policy questions. This paper argues for beginning by clarifying the desired outcome (the “why”) before focusing on transparency processes and tactics (the “how”). Moving beyond analyses of the ways algorithms impact human lives, this research articulates an approach that tests and implements the right set of transparency tactics aligned to specific predefined behavioral outcomes we want to see on digital platforms. To elaborate on this approach, three specific desirable behavioral outcomes are highlighted, to which potential transparency tactics are then mapped. No single set of transparency tactics can solve all the harms possible from digital platforms, making such an outcomes-focused transparency tactic selection approach the best suited to the constantly evolving nature of algorithms, digital platforms, and our societies.
