Amid the optimism surrounding generative Artificial Intelligence (gen-AI) in language education, its connection to learning practices remains weak. The emergence of gen-AI has preceded considerations of how it should be applied in teaching and learning. Yet while gen-AI has been justified in terms of its potential to enhance learner agency by expanding opportunities to engage with language, such as through the generation of content or the translation of texts, it can also take power away from learners. How can learners be self-determining when their choices are increasingly guided by gen-AI? In this paper, I conceive the arrangements of humans and software as an assemblage of complex and dynamic social and technical processes. Drawing on a flat ontology, in which all agents (human and non-human, material and subjective) have equal ontological status, I argue that learner agency has its origins in the messy and lively interactions between heterogeneous actors. In particular, I consider active and passive affects as part of the same process: active when we bring something into effect ourselves, passive when our self-determination is changed not by our own power but through external forces acting on it (such as gen-AI). From this, I explore the constraining and enabling potential of gen-AI. Finally, I extend this discussion to the emergence of learner agency.