Abstract

This article serves as a warning. Prompted by the likely increase in reliance on artificial intelligence (AI) in intelligence analysis, it raises grave concerns about the prospect of relying on large language models (LLMs), including in high-stakes contexts such as the state-level resort-to-force decision making that is the focus of this special issue. It begins by identifying the twin informational pathologies to which intelligence analysis is subject by its very nature: information scarcity and epistemic scarcity. It goes on to argue that the use of LLMs would compound these pathologies, attributing this risk to the nature of the international information landscape, especially the rise of private actors in data markets and the changed intelligence environment in the years following September 11. The article concludes by recommending possible responses to the informational risks engendered by the use of LLMs in intelligence analysis, especially in high-stakes contexts.
