Abstract Regulation of military AI may take place in three ways. First, existing rules and principles of international humanitarian law (IHL) already apply, or could be extended via reinterpretation to apply, to military AI; second, new AI regulation may appear via “add-ons” to existing rules; third, regulation of military AI may emerge as an entirely new framework, either through new state behavior that crystallizes into customary international law or through a new legal act or treaty. This typology helps identify possible modes of regulation that are presently under-researched or ignored, for example how Rules of Engagement (RoE) may serve to control the use of military AI. Expanding on existing scholarship, the article discusses how military AI may operate under different forms of military command and control systems, how regulation of military AI is a question not only of “means” but also of “methods” of warfare, and how the doctrine of supervisory responsibility may go beyond the doctrine of command responsibility. Should fully automated Lethal Autonomous Weapons Systems (LAWS) become available and be considered for use, it is suggested that their use be prohibited in densely populated areas, following the same logic as the restrictions on incendiary weapons. Further, export restrictions on fully automated LAWS could be introduced to prevent their proliferation to non-state actors and rogue states.