Abstract

Organizations are making massive investments in artificial intelligence (AI), and recent demonstrations and achievements highlight the immense potential for AI to improve organizational and human welfare. But realizing the potential of AI necessitates a better understanding of the various ethical issues involved with deciding to use AI, developing and maintaining it, and allowing it to make decisions that have moral consequences. People want organizations using AI, and the AI systems themselves, to behave "ethically". But "behaving ethically" means different things to different people. How should organizations using AI—and the AI itself—handle ethical dilemmas where humans cannot agree on the right course of action? Though a variety of ethical frameworks have been suggested, these approaches do not adequately explain how people make ethical evaluations of AI systems or how to account for the fundamental disagreements people have about what is and is not ethical behavior. Drawing on moral foundations theory, we theorize that a person will perceive an organization's use of AI, and the AI's decisions, as ethical to the extent that those decisions resonate with the person's moral foundations. Because people hold diverse moral foundations, individual moral differences must be addressed in the development, application, and evaluation of AI systems. Our approach addresses several unresolved issues in the ethical AI literature. It suggests potential solutions for handling conflicts in moral judgment and highlights future directions for augmenting ethical AI frameworks with individual differences in moral judgment.
