Abstract

The article addresses the problem of determining the methodological and conceptual foundations of artificial intelligence ethics. It is shown that the principle-based approach rests on the theory of value embedding, which assumes that technical objects can either carry values themselves or at least contribute to the realization of certain values. At the same time, this approach is highly dependent on stakeholders and tends to declare ethics rather than ensure it. The person-centered approach is based on the idea of personal moral responsibility; its main problems are the responsibility gap and the unpredictability of the actions of artificial intelligence. A critical approach is proposed, according to which the subject of artificial intelligence ethics is the impact of the technology on people's ideas and values, their behavior and decision-making. The article introduces and discusses the concept of the scale paradox arising from the use of artificial intelligence: numerous individually ethical uses of the technology can, taken together, lead to ethically unacceptable consequences. It is shown that one way to apply the critical approach is to study the attitudes and stereotypes associated with artificial intelligence in the mass consciousness.
