Abstract
Influence operations in cyberspace have raised questions about how narratives are strategically disseminated and circulated online, and in particular, whether state-of-the-art machine learning (ML) techniques for narrative understanding and automated narrative generation are, or will soon become, part of adversarial nation-states’ arsenal. However, until we clarify some fundamental ambiguities surrounding narratives in cyberspace, we cannot accurately assess the threat posed by ML. For instance, how do we define “narrative” in a way that makes sense across all the various organizations and academic disciplines that must be involved in defending against disinformation? What blind spots in our shared lexicon have stymied the United States’ response to disinformation attacks so far? This paper takes a step toward alleviating the confusion by analyzing the usage of some common terminology, summarizing recent contributions in the realm of disinformation, positing a new systems-dynamic model of narrative in cyberspace, and demonstrating the use of the model in the context of an existing case study in disinformation.