AI-powered systems pose novel challenges for designers, policymakers, and users, making it more difficult to assess potential harms and outcomes. Although understanding risks is a prerequisite for building trust in technology, users are often excluded from risk assessments and explanations in policy and design. To address this issue, we conducted three workshops with 18 participants in which we discussed the EU AI Act, the European proposal for a legal framework regulating AI. Based on the results of these workshops, we propose a user-centered conceptual model with five risk dimensions (Design and Development, Operational, Distributive, Individual, and Societal) comprising 17 key risks. We further identify six criteria for categorizing use cases. Our conceptual model (1) contributes to responsible design discourses by connecting risk assessment theories with user-centered approaches, and (2) supports designers and policymakers in more strongly considering a user perspective that complements their own expert views.