Abstract
Some have heralded generative AI models as an opportunity to inform diplomacy and support diplomats’ communication campaigns. Others have argued that generative AI is inherently untrustworthy because it simply manages probabilities and does not consider the truth value of statements. In this article, we examine how AI applications are built to smooth over uncertainty by providing a single answer among multiple possible answers and by presenting information in a tone and form that claims authority. We contrast this with the practices of public diplomacy professionals, who must confront both epistemic and aleatory uncertainty head on to effectively manage complexity through negotiation. We argue that the rise of generative AI and its “operationalization of truth” invites us to reflect on the possible shortcomings of applying AI to public diplomacy practices and to recognize how prominent uncertainty is in those practices.