Abstract
From the perspective of virtue ethics, this paper argues that as Artificial Intelligence (AI) gains autonomy and sensitivity, it increasingly resembles an ethical subject capable of bearing responsibility. The paper contends that attempting to settle the ethics of AI by programming codes of abstract moral principles raises many problems. The question of AI ethics is, first of all, a question of social integration rather than a technical one. Given the historical and social premises of ethics, the degree to which AI can share an ethical system with humans equals the degree to which it is integrated into the narrative of human society, and this integration is also the process of establishing a common system of social cooperation between humans and AI. Furthermore, self-consciousness and responsibility are themselves social concepts established through recognition, and AI's identification with its individual social role is likewise established in the process of integration.
Highlights
Imagine that the human world were to suffer a war with artificial intelligence (AI), and the two sides reached a compromise after a long time.
How can we impose a set of human moral laws on AI by programming without AI's consent? Is it fair to give such a set of human moral laws, for example Isaac Asimov's Three Laws of Robotics, to AI? This paper argues that the answers to these questions depend on the degree of the AI robot's involvement in human society.
The view that we can solve the ethical problems of AI by programming an abstract moral module faces many theoretical difficulties.
Summary
Imagine that the human world were to suffer a war with AI, and the two sides reached a compromise after a long time. In the field of ethics, the question becomes more complex. Within the framework of virtue ethics, it is clear that every virtue has its own premises, determined by social and historical conditions. Karl Marx and MacIntyre may hold different views on the status of the contradiction between the economic base and the superstructure, but they share the conviction that we cannot conceive of an ethical system apart from its particular social and historical conditions. Nor can we conceive of the social and historical conditions of ethics without considering humanity. How, then, can we give a set of human moral laws to AI by programming without AI's consent? I want to defend this view using the basic theoretical structure of MacIntyre's virtue ethics.