Frontline witnessing and civic journalism are affected by both the rhetoric and the reality of misinformation and disinformation. This essay highlights key insights from the activities of the human rights and civic journalism network WITNESS as it prepares for new forms of media manipulation, such as deepfakes, and works to ensure that an emergent “authenticity infrastructure” is in place to respond to global needs for reliable information without creating additional harms. Based on global consultations on perceived threats and prioritized solutions, its efforts focus primarily on synthetic media and deepfakes, which not only facilitate audiovisual falsification (including non-consensual sexual images) but also, because they are embedded in societal dynamics of surveillance and civil society suppression, cast doubt on real footage and so undermine the credibility of civic media and frontline witnessing (the so-called “liar’s dividend”). These threats operate within a global context in which journalists and some distant-witness investigators self-identify as lacking relevant skills and capacity and face inequity in access to detection technologies. Within this context, “authenticity infrastructure” tracks media provenance, integrity, and manipulation from camera to edit to distribution, and so provides “verification subsidies” that enable distant witnesses to properly interpret eyewitness footage. This “authenticity infrastructure” and related tools are rapidly moving from niche to mainstream through initiatives such as the Content Authenticity Initiative and the Coalition for Content Provenance and Authenticity, raising key questions about who participates in the production and dissemination of audiovisual information, under what circumstances, and to what effect for whom. Provenance risks being weaponized unless key concerns are integrated into infrastructure proposals and implementation: provenance data may be used against vulnerable witnesses, or the absence of a provenance trail, arising for legitimate reasons of privacy or limited technological access, may be used to undermine credibility. Regulatory and extra-legal co-option is a further concern as securitized “fake news” laws proliferate. This paper argues that investigating both phenomena, deepfakes and emergent authenticity infrastructure(s), is important because it highlights the risks of the “information disorder” of deepfakes, which challenge the credibility and safety of frontline witnesses, as well as the risks of responses to that “disorder,” which may worsen inequities in access to mitigation tools or increase exposure to harms from technology infrastructure.