Abstract

Given the contemporary ambivalent standpoints toward the future of artificial intelligence, recently denoted as the phenomenon of Singularitarianism, Gregory Bateson’s core theories of the ecology of mind, schismogenesis, and the double bind are hereby revisited, taken out of their respective sociological, anthropological, and psychotherapeutic contexts and recontextualized within the field of Roboethics, with a twofold aim: (a) the proposal of a rigid ethical standpoint toward both artificial and non-artificial agents, and (b) an explanatory analysis of the reasons behind such a polarized outcome of contradictory views regarding the future of robots. Firstly, the paper applies the Batesonian ecology of mind to construct a unified roboethical framework that endorses a flat ontology embracing multiple forms of agency, borrowing elements from Floridi’s information ethics, classic virtue ethics, Félix Guattari’s ecosophy, Braidotti’s posthumanism, and the Japanese animist doctrine of Rinri. The proposed framework aims to act as a pragmatic solution to the endless dispute over the nature of consciousness and the natural/artificial dichotomy, and as a further argument against the recognition of future artificial agency as a potential existential threat. Secondly, schismogenic analysis is employed to describe the emergence of hostile human–robot cultural contact, tracing its origins from the early scientific discourse on man–machine symbiosis up to the contemporary countermeasures against superintelligent agents. Thirdly, Bateson’s double bind theory is utilized as a methodological tool for analyzing humanity’s collective agency, leading to the hypothesis of a collective schizophrenic symptomatology, caused by the constancy and intensity of the conflicting messages emitted by both proponents and opponents of artificial intelligence. The treatment of the double bind is the mirroring “therapeutic double bind,” and the article concludes by proposing the conceptual pragmatic imperative necessary for such a condition to follow: humanity’s conscious habitualization of danger and familiarization with its possible future extinction, as the result of a progressive blurring between natural and artificial agency, succeeded by a totally non-organic intelligent form of agency.
