ABSTRACT

OpenAI’s ChatGPT is a large language model (LLM) that excels at generating text and public controversy. Upon its release, many marveled at its ability to author intelligible and generically responsible texts (Herman). Writing about his students’ experiences using artificial intelligence (AI) writing assistants, S. Scott Graham remarks that the results were “consistently mediocre—and usually quite obvious in their fabrication.” Why might this be true? How can an LLM succeed in some respects and fail in others? We argue that the discrepant reactions to human and AI rhetoric are a question of genre, specifically that AI rhetoric is only generic; AI rhetoric represents a new enactment of “writing degree zero” (Barthes) that is disengaged from immediate rhetorical situations and knowledge bases. AI text generators (currently) have more difficulty simulating the positioned perspectives that human writers bring to situations and communicate to audiences through their genre usage. Drawing on the work of Bakhtin, we treat this problem as a question of generic form and audience addressivity. We describe the interplay of form and addressivity as genre signaling and offer it as a construct for analyzing AI rhetoric and genre as a cultural form (Miller). Genre signaling (Hart-Davidson and Omizo) describes a feature of communicative behavior over time that can help both humans and machines evaluate written discourse by the stabilized formal features it exhibits. When a text contains specific genre signals at expected frequencies and intensities, it may be recognized as generally accurate, reliable, and trustworthy. Without these signals, a text with a similar topical focus might fail to be taken as credible or useful. In this essay we propose to quantify genre signaling along three measures: (1) stability, (2) frequency, and (3) periodicity.