Abstract

Are computers on the way to acquiring “superintelligence”? Can human deliberation and decision-making be fully simulated by the mechanical execution of AI programs? On close examination, these expectations turn out not to be well founded, since algorithms (in Kantian terms, “imperatives of skill” implemented by technological means) ultimately have “heteronomous” characteristics. So-called AI “autonomy” is a sensor-directed performance automatism which, compared with the potential for ethical judgment in human “practical reason”, proves to be limited in significant ways, even if digital technologies are able, in so-called “machine learning”, to adapt probabilistically to new data. This is shown in some detail with reference to the idea of a “digital humanism” introduced by Julian Nida-Rümelin and Nathalie Weidenfeld, who argue that algorithms may be useful “tools” but who emphasise, against excessive “post-humanist” (utopian or dystopian) ideas about AI, that there is a crucial difference between human action and its (partial) AI simulation. While Nida-Rümelin and Weidenfeld’s “digital humanism” is, on the one hand, inspired by Kant’s conception of autonomous human self-determination, the concept of “structural rationality” that they advocate is, on the other hand, quite problematic. As I argue, however, “digital humanism” can be improved with reference to Barbara Herman’s analysis of “moral judgment” and to Allen Wood’s reflections on “human dignity”.
