Abstract
In this contribution, I start from Levy’s valuable suggestion in neuroethics to distinguish between the “slow-conscious responsibility” of us as persons and the “fast-unconscious responsiveness” of the sub-personal brain mechanisms studied in cognitive neuroscience. Both, however, are accountable for how they respond to environmental (physical, social, and ethical) constraints. I propose to extend Levy’s suggestion to a fundamental distinction between the “moral responsibility of conscious communication agents” and the “ethical responsiveness of unconscious communication agents”, such as our brains but also AI decision supports. Both, indeed, can be included in the category of the “sub-personal modules” of our moral agency as persons. I show the relevance of this distinction, also from the logical and computational standpoints, in both neuroscience and computer science for the current debate about an ethically accountable AI. Machine learning algorithms, indeed, when applied to automated supports for decision-making processes in several social, political, and economic spheres, are by no means “value-free” or “amoral”. They must satisfy an ethical responsiveness in order to avoid what has been called the unintended, but real, “algorithmic injustice”.