In today’s digital landscape, cultural content such as texts, films, images, and recorded sounds is increasingly subjected to automatic (or semi-automatic) processes of identification and classification. On a daily basis, spam filters scan swaths of emails in order to separate legitimate from illegitimate textual messages (Brunton, 2013), algorithms analyze years of user-uploaded film on YouTube in search of copyright violations (Heuguet, 2019), and software systems are deployed to scrutinize millions of images on social media sites in order to detect sexually offensive content (Liao, 2018). These examples reveal how machines and algorithmic systems are increasingly utilized to make complex judgments about cultural content. Indeed, it could be argued that the wide-ranging adoption of content identification systems is constructing new ontologies of culture and regimes of truth in the online domain. When put into action, content identification systems are entrusted with the ability to separate good/bad and legal/illegal forms of communication and are used to secure the singularity, value, authenticity, origin, and ownership of content. Such efforts are deeply embedded in constructions of knowledge, new forms of political governance, and, not least, global market transactions. Content identification tools now make up an essential part of the online data economy, protecting the interests of rights holders and advancing the mathematization, objectification, and commodification of cultural productions.