The article addresses the problem of AGI (Artificial General Intelligence) research with reference to Nick Bostrom’s concept of existential risk and Ingmar Persson’s and Julian Savulescu’s proposal of biomedical moral enhancement, viewed from a pedagogical-anthropological perspective. A major focus is placed on the absence of pedagogical paradigms within the techno-progressive discourse, which results in a severely reduced idea of education and human development. In order to prevent future existential risks, the techno-progressive discourse should refer, at least to some extent, to the qualitative approaches of the humanities. Pedagogical anthropology in particular reflects on the presupposed and therefore frequently unarticulated images of man within the various scientific disciplines, and should hence be recognized as a challenge to the solely quantitative perspective of AGI research and transhumanism. I will argue that instead of forcing man to adapt physically to artificial devices, as the techno-progressive discourses suggest, the most effective way of avoiding future existential risks concerning the relationship between mankind and highly advanced technology would be—as John Gray Cox proposes—to make AGIs adopt crucial human values, which would integrate their activity into the social interactions of the lifeworld (Lebenswelt).