Abstract

Technological and economic forces are radically restructuring our ecosystem of knowledge, and opening our information space increasingly to forms of digital disruption and manipulation that are scalable, difficult to detect, and corrosive of the trust upon which vigorous scholarship and liberal democratic practice depend. Using an illustrative case from China, this article shows how a determined actor can exploit those vulnerabilities to tamper dynamically with the historical record. Briefly, Chinese knowledge platforms comparable to JSTOR are stealthily redacting their holdings, and globalizing historical narratives that have been sanitized to serve present political purposes. Using qualitative and computational methods, this article documents a sample of that censorship, reverse-engineers the logic behind it, and analyzes its discursive impact. Finally, the article demonstrates that machine learning models can now accurately reproduce the choices made by human censors, and warns that we are on the cusp of a new, algorithmic paradigm of information control and censorship that poses an existential threat to the foundations of all empirically grounded disciplines. At a time of ascendant illiberalism around the world, robust, collective safeguards are urgently required to defend the integrity of our source base, and the knowledge we derive from it.
