In this article, we investigate the previously unstudied reasons why companies use prohibited artificial intelligence (AI) applications and are consequently punished with large fines under the European General Data Protection Regulation (GDPR). Investigating and understanding why companies engage in such severe AI ethical malpractices is essential to correcting them, as these malpractices seriously jeopardize the future development of AI. Based on a sample of 34 companies, 23 of which were sanctioned under the GDPR for severe violations of AI ethical principles, and using fuzzy-set qualitative comparative analysis, this study demonstrates that for a company to behave ethically with respect to AI and comply with the GDPR, it must have an AI ethical statement, show a very strong concern for information cybersecurity, and either be based in a country considered ethical or carefully monitor its behavior in countries with lower ethical standards. As a theoretical contribution, although some authors argue that AI ethical principles are useless, the present research introduces a new theoretical perspective by demonstrating cause–effect relationships between business configurations and AI ethical failures. The results also have important managerial relevance because they show that the lack of an AI ethical statement is the most relevant condition leading to ethical misbehavior.