Abstract

In voicing commitments to the principle that armed forces should adopt artificial-intelligence (AI) tools responsibly, a growing number of states have referred to a concept of “Responsible AI.” As part of an effort to help develop the substantive contours of that concept in meaningful ways, this position paper introduces a notion of “responsible reliance.” It is submitted that this notion could help the policy conversation expand from its current, relatively narrow focus on interactions between an AI system and its end-user to also encompass the wider set of interdependencies involved in fulfilling legal obligations concerning the use of AI in armed conflicts. The authors argue that, to respect international humanitarian law and ensure accountability, states ought to devise and maintain a framework ensuring that natural persons involved in the use of an AI tool in an armed conflict can responsibly rely at least on: (1) the tool’s technical aspects; (2) the conduct of other people involved in the development and use of that AI tool; and (3) the policies and processes implemented at the state level. According to the authors, the “responsible reliance” notion could serve, among other things, as a basis on which to articulate legal requirements, prohibitions, and permissions across diverse areas, from the design of AI tools to human-machine interactions to the configuration of responsible-command frameworks.
