Abstract

The limitations on the development of artificial intelligence technologies that well-known developers of these technologies have recently been calling for stem from fears of existential risks, one of which, according to the theory of the philosopher N. Bostrom, is superintelligence as a result of the further development of artificial intelligence. Although the problem of artificial intelligence has been widely studied in foreign philosophical literature, the question of the values underlying new technologies remains practically unexamined. The article argues that Bostrom's theory of the risks of superintelligence rests on axiological grounds: artificial intelligence, according to the philosopher, can pose a threat to transhumanist values. At the same time, it is precisely these values that are supposed to serve as the basis for the development of artificial intelligence in order to prevent a catastrophe. The aim of the article is to analyze the ethico-philosophical values of transhumanism and their role in the development of artificial intelligence technologies. It is noted that current AI projects are in fact developing precisely in the spirit of the philosophy of transhumanism. The scientific novelty of the article lies in the ethical and philosophical analysis of the transhumanist values that N. Bostrom proposes to use in the context of AI development. As a result, the ethical meanings of AI development are assessed through the lens of N. Bostrom's concept of transhumanist values.
