ABSTRACT This article explores the imprecise boundary between Lethal Autonomous Weapons Systems (LAWS) and Human-Machine Teaming – a subset of Human-Machine Interaction – and the extent to which both are emerging as points of concern, and of opportunity, in military and security policy debates. Because Human-Machine Teaming depends on artificial intelligence (AI) capabilities, questions of reliability and confidence also arise, particularly in the heat of battle. Human-Machine Teaming, also known as Manned-Unmanned Teaming, seeks to engender trust and collaborative partnerships between humans and robots or algorithms. The recent prospect of LAWS, or so-called ‘killer robots,’ has raised questions about the degree to which such systems can be trusted to select and engage targets without further human intervention. Beyond examining this ‘trust factor,’ the article considers the security threats posed by both state and non-state actors, and the inadvertently complicit role multinational corporations play in such developments when civilian technology is modified for dual-use purposes. The effectiveness of government regulation of AI, including whether AI can be ‘nationalised’ for national security reasons, is also examined as part of broader AI non-proliferation efforts.