Abstract

Artificial Intelligence (AI) holds great potential for the military domain but is also seen as prone to data bias and as lacking transparency and explainability. To advance the trustworthiness of AI-enabled systems, a dynamic approach to the development, deployment and use of AI systems is required. When this approach incorporates ethical principles such as lawfulness, traceability, reliability and bias mitigation, it is called ‘Responsible AI’. This article describes the challenges of using AI responsibly in the military domain from a human factors and ergonomics perspective. Many of the ironies of automation originally described by Bainbridge still apply to AI, but there are also unique challenges and requirements to consider, such as a greater emphasis on up-front ethical risk analyses, validation and verification, as well as moral situation awareness during the deployment and use of AI in military systems.
