Abstract

Australia has prioritised robotics, AI and autonomous systems as a means of developing sovereign military capability. Australia is committed to Article 36 reviews of all new means and methods of warfare to ensure that weapons and weapon systems operate within acceptable systems of control. Additionally, Australia has conducted significant reviews of the risks AI poses to human rights and within intelligence organisations, and has committed to producing ethics guidelines and frameworks for security and defence. While Australia has not adopted an AI governance framework specifically for the Australian Defence Organisation (ADO), the Defence Science and Technology Group (DSTG) has published the technical report “A Method for Ethical AI in Defence” (MEAID), which includes a framework and pragmatic tools for managing ethical and legal risks in military applications of AI. Australia can play a leadership role by integrating legal and ethical considerations into the ADO’s AI capability acquisition process. This requires a policy framework that defines its legal and ethical requirements, is informed by defence and industry stakeholders, and provides a practical methodology for integrating legal and ethical risk mitigation strategies into the acquisition process.
