Abstract

Machine learning (ML) algorithms and the artificial intelligence (AI) systems that they enable are powerful technologies that have inspired considerable excitement, especially within large business and governmental organizations. In an era when increasingly concentrated computing power enables the creation, collection, and storage of “big data,” ML algorithms have the capacity to identify non-intuitive correlations in massive datasets, and as such can theoretically be more efficient and effective than humans at using those correlations to make accurate predictions. However, biases can be encoded in the datasets on which ML algorithms are trained, arising from poor sampling strategies, incomplete or erroneous information, and the social inequalities that exist in the actual world. Additionally, the inherent complexities of ML algorithms, which defy explanation by even the most expert practitioners, can make it difficult, if not impossible, to identify the root causes of unfair decisions. That same opacity also presents an obstacle for individuals who believe that they have been evaluated unfairly, want to challenge a decision, or try to determine who should—or even could—be held accountable for mistakes. This paper surveys current research in and around ML and AI, drawing primarily from work in computer science, the social sciences, and the law. Although it examines material across several contexts, the underlying intention is to consider how insights and lessons from a number of different domains can be applied within consumer financial services. And while there are certainly implications for organizational planning and strategy, the analytical focus rests primarily on the individuals and groups who are impacted directly by AI systems’ decision-making processes. This paper is organized as follows: Section I explores the social contexts into which ML and AI technologies are integrated, and the structural inequalities that influence—and are in turn influenced by—those integrations. Section II surveys ongoing research into data quality, fairness, transparency, and accountability; specific examples of problems that have emerged around these issues; and some of the methods and tools that have been proposed for managing those problems. Finally, the conclusion examines several actual-world cases of ML and AI’s human impacts and the challenges and opportunities posed by algorithmic governance.
