Abstract

The rapid development of artificial intelligence brings numerous benefits but also poses risks. On the one hand, automated systems improve business productivity and facilitate everyday activities; on the other hand, problems arise such as the unauthorized use of data, information asymmetry in algorithmic decision-making, violations of basic human rights, and a lack of transparency. Numerous legal and ethical dilemmas are emerging, creating a need to regulate the further development and application of artificial intelligence systems. The main goal of regulating artificial intelligence is to build an "ecosystem of trust". Strengthening trust in artificial intelligence can be achieved only by engaging all relevant actors in developing an adequate legal and ethical framework. In this paper, we present the existing regulatory models and identify the most adequate model for regulating artificial intelligence. We also present the European regulatory framework, which is relevant for Serbia, whose national regulations and ethical guidelines should be aligned with EU standards.