Abstract

Many important decisions are increasingly being made with the help of information systems that use artificial intelligence and machine learning models. These computational models are designed to discover useful patterns in large amounts of data, augmenting human capabilities to make decisions in various application domains. However, there are growing concerns regarding the ethical challenges that these augmented decision making (ADM) models face, most notably the issue of “algorithmic bias”, where the models systematically produce less favorable (i.e., unfair) decisions for certain groups of people. In this paper, we argue that algorithmic bias is not just a technical problem, and that its successful resolution requires deep insights into human behavior and economic incentives. We discuss a human-centric, fairness-aware ADM pipeline that highlights the strategic roles played by human decision makers. For each step of the ADM pipeline, we review the emerging literature on fairness-aware machine learning and then discuss strategic decisions that humans need to make, such as selecting proper fairness objectives, interpreting machine learning model outputs, and recognizing fairness-induced tradeoffs and their implications. Our discussion reveals a number of future research opportunities that are uniquely suitable for Information Systems researchers to pursue.
