Abstract

In the age of Big Data, companies and governments are increasingly using algorithms to inform hiring decisions, employee management, policing, credit scoring, insurance pricing, and many more aspects of our lives. Artificial intelligence (AI) systems can help us make evidence-driven, efficient decisions, but can also confront us with unjustified, discriminatory decisions wrongly assumed to be accurate because they are made automatically and quantitatively. It is becoming evident that these technological developments are consequential to people’s fundamental human rights. Despite increasing attention to these urgent challenges in recent years, technical solutions to these complex socio-ethical problems are often developed without empirical study of societal context and the critical input of societal stakeholders who are impacted by the technology. On the other hand, calls for more ethically and socially aware AI often fail to provide answers for how to proceed beyond stressing the importance of transparency, explainability, and fairness. Bridging these socio-technical gaps and the deep divide between abstract value language and design requirements is essential to facilitate nuanced, context-dependent design choices that will support moral and social values. In this paper, we bridge this divide through the framework of Design for Values, drawing on methodologies of Value Sensitive Design and Participatory Design to present a roadmap for proactively engaging societal stakeholders to translate fundamental human rights into context-dependent design requirements through a structured, inclusive, and transparent process.

Highlights

  • Predictive, classifying, and profiling algorithms of a wide range of complexity—from decision trees to deep neural networks—are increasingly impacting our lives as individuals and societies

  • Our intention is to demonstrate how artificial intelligence (AI) can be designed for values that are core to societies within the European Union (EU)

  • The paper proceeds as follows: first, we provide a brief introduction to Design for Values with an overview of stakeholder engagement methods from Value Sensitive Design and Participatory Design; next, we present our contribution, adopting fundamental human rights as top-level requirements that will guide the design process and demonstrate the implications for a range of AI application contexts and key stakeholder considerations; finally, we discuss future steps needed to implement our roadmap in practice.


Introduction

Predictive, classifying, and profiling algorithms of a wide range of complexity—from decision trees to deep neural networks—are increasingly impacting our lives as individuals and societies. Calls for more ethically and socially aware AI often fail to provide answers for how to proceed beyond stressing the importance of transparency, explainability, and fairness. Bridging these socio-technical gaps is essential for designing algorithms and AI that address stakeholder needs consistent with human rights. The paper proceeds as follows: first, we provide a brief introduction to Design for Values with an overview of stakeholder engagement methods from Value Sensitive Design and Participatory Design; next, we present our contribution, adopting fundamental human rights as top-level requirements that will guide the design process and demonstrate the implications for a range of AI application contexts and key stakeholder considerations; finally, we discuss future steps needed to implement our roadmap in practice.
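
To make the translation step concrete, the sketch below traces one hypothetical path through a values hierarchy for a hiring-algorithm context: an abstract value (the fundamental right to non-discrimination) is narrowed to a context-dependent norm and then to a testable design requirement. This is a minimal illustration of the kind of output the roadmap aims at; the class names, group labels, audit figures, and the 0.1 threshold are assumptions made for this example, not prescriptions from the paper.

```python
# Minimal sketch of a values hierarchy: value -> norm -> design requirement.
# All concrete names, numbers, and thresholds below are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class ValuesHierarchy:
    value: str        # abstract value, e.g., a fundamental human right
    norm: str         # context-dependent norm, agreed with stakeholders
    requirement: str  # concrete, testable design requirement


def demographic_parity_gap(selection_rates: dict[str, float]) -> float:
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates.values()
    return max(rates) - min(rates)


# Hypothetical hiring-algorithm context.
hierarchy = ValuesHierarchy(
    value="Non-discrimination (Art. 21, EU Charter of Fundamental Rights)",
    norm="Candidates from different groups are shortlisted at similar rates",
    requirement="Demographic parity gap below 0.1 on audit data",
)

# Made-up shortlisting rates from a hypothetical audit.
audit = {"group_a": 0.42, "group_b": 0.37}
gap = demographic_parity_gap(audit)
print(f"{hierarchy.requirement}: gap={gap:.2f}, satisfied={gap < 0.1}")
```

In practice, the choice of norm and threshold would not be fixed by developers alone: it is exactly the kind of context-dependent design decision that the stakeholder engagement methods from Value Sensitive Design and Participatory Design surveyed in this paper are meant to inform.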

