Abstract

The unpredictability of artificial intelligence (AI) services and products poses major ethical concerns for multinational companies, as evidenced by the prevalence of unfair, biased, and discriminatory AI systems. Examples such as Amazon’s recruiting tool, Facebook’s biased ads, and racially biased healthcare risk algorithms have raised fundamental questions about what these systems should be used for, the inherent risks they carry, and how those risks can be mitigated. Unfortunately, these failures not only highlight the lack of regulation in AI development but also reveal how organisations are struggling to alleviate the dangers associated with this technology. We argue that to successfully implement ethical AI applications, developers need a deeper understanding not only of the implications of misuse but also of a grounded approach to their conception. Judgement studies were therefore conducted with experts from data science backgrounds, who identified six performance areas, resulting in a theoretical framework for the development of ethically aligned AI systems. This framework also reveals that these performance areas require specific mechanisms which must be acted upon to ensure that an AI system meets ethical requirements throughout its lifecycle. The findings further outline several constraints which present challenges to the realisation of these elements. By implementing this framework, organisations can contribute to elevated trust between technology and people, with significant implications for both IS research and practice. The framework will further allow organisations to take a positive and proactive approach in ensuring they are best prepared for the ethical implications associated with the development, deployment, and use of AI systems.
