Abstract

This article considers the medial logics of American terrorist watchlist screening in order to study the ways in which digital inequities result from specific computational parameters. Central to its analysis is Secure Flight, an automated prescreening program run by the Transportation Security Administration (TSA) that identifies low- and high-risk airline passengers through name-matching algorithms. Considering Secure Flight through the framework of biopolitics, this article examines how passenger information is aggregated, assessed and scored in order to construct racialised assemblages of passengers that reify discourses of American exceptionalism. Racialisation here is neither a consequence of big data nor a motivating force behind the production of risk-assessment programs. Both positions would maintain that discrimination is simply an effect of an information management system that considers privacy as its ultimate goal, and that it is easily mitigated with more accurate algorithms. Far from simply emerging as an effect of discriminatory practices at airport security, racialisation formats the specific techniques embedded in terrorist watchlist matching, in particular the strategies used to transliterate names across different script systems. I thus argue that the biopolitical production of racialised assemblages forms the ground zero of Secure Flight’s computational parameters, as well as of its claims to accuracy. This article concludes by proposing a move away from the call to solve digital inequities with more precise algorithms, toward a careful interrogation of the forms of power complicit in the production and use of big data analytics.