Abstract

The business, management, and business ethics literature pays little attention to the topic of AI robots. A broad spectrum of potential ethical issues pertains to the use of driverless cars, AI robots in care homes, and AI robots in the military, such as Lethal Autonomous Weapon Systems. However, there is a scarcity of in-depth theoretical, methodological, or empirical studies that address these ethical issues, for instance, the role of morality and where accountability resides in the use of AI robots. To address this dearth, this study offers a conceptual framework that interpretively develops the ethical implications of AI robot applications, drawing on descriptive and normative ethical theory. The new framework elaborates on how the locus of morality (human to AI agency) and moral intensity combine within context-specific AI robot applications, and how this might influence accountability thinking. Our theorization indicates that in situations of escalating AI agency and situational moral intensity, accountability is widely dispersed between actors and institutions. ‘Accountability clusters’ are outlined to illustrate the interrelationships between the locus of morality, moral intensity, and accountability, and how these invoke different categorical responses to the use of AI robots: (i) illegal, (ii) immoral, (iii) permissible, and (iv) supererogatory. These enable discussion of the ethical implications of using AI robots, and of the associated accountability challenges for a constellation of actors—from designers and individual/organizational users to the normative and regulative approaches of industrial/governmental bodies and intergovernmental regimes.

Highlights

  • Artificial intelligence (AI) robots are intelligent, semiautonomous machines, software, and systems with an increasing ability to formulate decisions, in collaboration with humans or on their own, in support of humans

  • We suggest two axial themes driving the framework—locus of morality and moral intensity—that combine in unique ways to render specific ‘clusters of accountability’ necessary for AI applications in business (Fig. 1)

  • Cluster 4 represents a cluster of applications characterized by strong AI ethical agency, high moral intensity, and the widest dispersal of accountability between actors

Summary

Introduction

Artificial intelligence (AI) robots are intelligent, semiautonomous machines, software, and systems with an increasing ability to formulate decisions, in collaboration with humans or on their own, in support of humans. To the best of our knowledge, applying normative and descriptive ethics to AI robots in business settings is a novel approach, as it enables one to integrate concepts from hitherto unrelated knowledge domains with concepts pertaining to AI robots and reflections on their practical implications. We use these insights to develop a new framework that incorporates the following constructs: the locus of morality (human to AI agency), moral intensity and accountability dispersal, accountability clusters, and the four ethical categories of illegal, immoral, permissible, and supererogatory. The market failures approach to business ethics (Heath, 2014) seeks to formulate the normative standards implicit in the basic economic assumptions underlying the market economy’s institutional mechanisms. It states that business and innovation require different rules than ordinary morality. Bommer et al. (1987) identified different factors, such as corporate goals, the juridical system, and religious or societal values, that can influence individual ethical and unethical decision making.

A New Framework for AI Robot Accountability
Discussion
Limitations and Directions for Future Research