Abstract

Defense organizations are increasingly developing ethical principles to guide the responsible design, development, and use of AI, most notably for defense, security, and intelligence purposes. While these principles have the potential to lead to more ethical and responsible uses of AI, they also create a novel issue that we term the challenge of ethical interoperability. In particular, we start with the observation that defense organizations frequently must collaborate, and so may need to use one another’s AI systems, either directly or indirectly. In such cases, the AI system presumably satisfies the originating organization’s ethical principles, but the adopting organization should only use it if it also satisfies its own AI principles. One might naturally consider using the operating characteristics of the AI system to establish ethical interoperability, since those parallel the features used for technical interoperability. However, we argue that if the operating characteristics are sufficient to provide such assurance, then they will be too detailed to be disclosed. Instead, we propose a system of self-certification that provides adopting organizations with assurance that a system satisfies their ethical AI principles. We argue that our proposed framework properly aligns incentives to ensure ethical interoperability between defense organizations. Moreover, although this challenge is particularly salient for defense organizations, ethical interoperability must also be addressed in non-defense settings by organizations that have developed ethical AI principles.
