Large language models (LLMs) and dialogue agents represent a significant shift in artificial intelligence (AI) research, particularly with the recent release of the GPT family of models. ChatGPT's generative capabilities and versatility across technical and creative domains led to its widespread adoption, marking a departure from the more limited deployments of previous AI systems. While society grapples with the emerging cultural impacts of this new societal-scale technology, critiques of ChatGPT within machine learning research communities have coalesced around its performance and around conventional safety evaluations relating to bias, toxicity, and “hallucination.” We argue that these critiques draw heavily on a particular conceptualization of the “human-centered” framework, which tends to cast atomized individuals as the key recipients of technology's benefits and detriments. In this article, we direct attention to another dimension of LLMs and dialogue agents’ impact: their effects on social groups, institutions, and the norms and practices that accompany them. By analyzing ChatGPT's social impact through a social-centered framework, we challenge individualistic approaches in AI development and contribute to ongoing debates around the ethical and responsible deployment of AI systems. We hope this effort will call attention to the need for more comprehensive and longitudinal evaluation tools (e.g., ethnographic analyses and participatory approaches) and will compel technologists to complement human-centered thinking with social-centered approaches.