Abstract Critics have identified a set of operational flaws in the machine learning and deep learning systems now discussed under the “AI” banner. Five of the most discussed are social biases, particularly racism; opacity, such that users cannot assess how results were generated; coercion, in that architectures, datasets, algorithms, and the like are controlled by designers and platforms rather than by users; systemic privacy violations; and the absence of academic freedom covering corporation-based research, such that results can be hyped in accordance with business objectives or suppressed and distorted when they conflict with them. This article focuses on a sixth problem with AI: the term intelligence misstates the actual status and effects of the technologies in question. To help fill the gap in rigorous uses of “intelligence” in public discussion, it analyzes Brian Cantwell Smith's The Promise of Artificial Intelligence (2019), noting that humanities disciplines routinely operate with Smith's demanding notion of “genuine intelligence.” To get this notion into circulation among technologists, the article calls for replacing the Two Cultures hierarchy codified by C. P. Snow in the 1950s with a system in which humanities scholars participate from the start in the construction and evaluation of “AI” research programs, on a basis of epistemic equality between qualitative and quantitative disciplines.