The contemporary digital reality is inconceivable without artificial intelligence (AI), which has permeated all cultural practices, from scientific and artistic endeavors to everyday activities. AI increasingly functions as an agent of communication and decision-making, gradually surpassing human capabilities across nearly all competencies. The information flows of this new reality can only be navigated through hybrid systems based on post-critical rationality, which inherently introduces an irreducible element of uncertainty and risk into human-machine environments. The article proposes examining the techno-subject through the lens of activity theory and the multiple types of rationality it generates. This framework facilitates the analysis of the sociocultural and anthropological implications arising from AI’s integration into human domains, while addressing the existential challenges inherent in constructing a harmonious hybrid society. Beyond V.S. Stepin’s types of scientific rationality, the author builds upon previously introduced forms of rationality: post-critical, object-oriented, instrumental, subjective, results-oriented, creative, and autopoietic. This theoretical framework enables a substantive discussion of various manifestations of AI subjectivity, including its generalized embodiment and creative specificity. The author proposes that the demarcation of domains of dominance between natural intelligence and AI in the intellectual sphere be resolved on the basis of their respective heuristic potentials, and maintains that natural intelligence invariably possesses superior capacity in this regard. The article examines approaches to risk assessment in AI implementation strategies, focusing on criteria for preserving anthropological and sociocultural profiles in the development of a hybrid society. The author substantiates the need to advance the concept of friendly AI, with consideration given not only to technological but also to anthropological aspects of human–machine interaction, and advocates for the development of institutions of social expert examination as regulatory mechanisms for natural–artificial intelligence interaction and anthropological–technological subject interfaces.