Abstract

Applications of artificial intelligence (AI) can optimise our lives to a remarkable degree, and it is clear that this will only increase as time passes. In many ways this is promising, but the forms AI takes in our society have also sparked concerns about dehumanisation. It is often recognised that AI systems implicitly exert social power relations (whether intentionally or not, as may be the case with bias), as if the danger would disappear once we improved our models and uncovered this hidden realm of intentional oppression. However, such views overlook the possibility that detrimental consequences may arise precisely because AI attains favourable goals flawlessly. This problem of adverse side effects, which are strictly accidental to the goals we set AI to effectuate, is explored here through the notion of “non-intentional dehumanisation”. To articulate this phenomenon, the essay proceeds in two parts. The first establishes how naive AI usage presents a paradigmatic case of this problem. The second argues that these harms occur in a two-fold fashion: AI risks harming not only the “used-upon” but also the user. With this conceptual model, awareness may be brought to the other side of our ready acceptance of AI solutions.
