Abstract
This study concerns the sociotechnical bases of human autonomy. Drawing on recent literature on AI ethics, on philosophical literature on the dimensions of autonomy, and on independent philosophical scrutiny, we first propose a multi-dimensional model of human autonomy and then discuss how AI systems can support or hinder it. What emerges is a philosophically motivated picture of autonomy and of the normative requirements that personal autonomy poses in the context of algorithmic systems. Various aspects of sociotechnical systems, ranging from consent to data collection and processing, to computational tasks and interface design, to institutional and societal considerations, must be accounted for to obtain a full picture of the potential effects of AI systems on human autonomy. It is clear how human agents can hinder each other’s autonomy, for example via coercion or manipulation, and how they can respect each other’s autonomy. AI systems can likewise promote or hinder human autonomy, but can they literally respect or disrespect a person’s autonomy? We argue for a philosophical view according to which AI systems, while not moral agents or bearers of duties and hence unable literally to respect or disrespect anyone, are governed by so-called “ought-to-be norms.” This explains the normativity at stake with AI systems. The responsible people (designers, users, and so on) are subject to duties and ought-to-do norms that correspond to these ought-to-be norms.
Highlights
It is relatively clear that AI technology can make a difference to the conditions of human autonomy, and it would be surprising if the difference it makes could not be either negative or positive.
What we suggest here is that AI systems are in this respect like hearts or clocks: in addition to being designed for specific tasks, for them to be ethically acceptable they ought to be such that they are not obstacles to human autonomy.
This article has examined the sociotechnical bases of human autonomy.
Summary
It is relatively clear that AI technology can make a difference to the conditions of human autonomy, and it would be surprising if the difference it makes could not be either negative or positive. In algorithmic and digitally governed contexts, respect (or quasi-respect) for human autonomy can be a matter of the intrinsic functioning or specific functionalities of sociotechnical systems, whereas indirect promotion or hindering concerns the unintended effects of the widespread use of technologies. The former matters relate, for example, to the meaningfulness of consent, to the alternatives available to individuals, to the information provided to them, and to the control individuals have over their data and the way it is used. Any quasi-intentional priming should be such that it can, when asked, be openly declared, known, and accepted. Where A* is an AI system and B a human agent, the relevant ought-to-be norms include the following:
System A* ought to be such that it does not interfere with B’s decision regarding what is best for B.
System A* should support human agents’ autonomy and positive relations to self, and discourage deference.
System A* should not “send a message” that B is not capable of, or does not possess the right to, self-determination.
System A* should allow B to be regarded in light of the particular self-understandings they have autonomously self-defined.
Finally, the exercise of autonomy also has societal prerequisites. Such prerequisites might include legislation that governs data access, collection, and management, as well as material prerequisites for the effective exercise of informational self-determination, such as access to technology.