Abstract

This article argues that artificial intelligence (AI)-enabled capabilities cannot effectively or reliably complement (let alone replace) the role of humans in understanding and apprehending the strategic environment to make predictions and judgments that inform strategic decisions. Furthermore, the rapid diffusion of, and growing dependency on, AI technology at all levels of warfare will have strategic consequences that counterintuitively increase the importance of human involvement in these tasks. Therefore, restricting the use of AI technology to automate decision-making tasks at the tactical level will do little to contain or control the effects of this synthesis at the strategic level of warfare. The article revisits John Boyd’s observation-orientation-decision-action metaphorical decision-making cycle (or “OODA loop”) to advance an epistemological critique of AI-enabled capabilities (especially machine learning approaches) to augment command-and-control decision-making processes. In particular, the article draws insights from Boyd’s emphasis on “orientation” as a schema to elucidate the role of human cognition (perception, emotion, and heuristics) in defense planning in a non-linear world characterized by complexity, novelty, and uncertainty. It also engages with the Clausewitzian notion of “military genius” – and its role in “mission command” – human cognition, systems, and evolution theory to consider the strategic implications of automating the OODA loop.
