Abstract

Over the past years, a considerable number of studies have stressed the risks, threats, and challenges brought on by the breathtaking advances in artificial intelligence (AI) and robotics. The intent of this chapter is to address this set of risks, threats, and challenges from a threefold legal perspective. First, the focus is on the aim of the law to govern the process of technological innovation, and on the different ways or techniques of attaining that aim. Second, attention is drawn to matters of legal responsibility, especially in the civilian sector, by taking into account methods of accident control that either cut back on the scale of the activity, e.g., through strict liability rules, or aim to prevent such activities altogether through the precautionary principle. Third, the focus is on the risk that legislation may hinder research in AI and robotics. Since many applications can provide services that benefit human well-being, the aim should be to prevent legislation from making individuals think twice before using or producing AI and robots. The overall idea is to flesh out specific secondary legal rules that should allow us to understand what kind of primary legal rules we may need. More particularly, the creation of legally de-regulated, or special, zones for AI and robotics appears to be a smart way to overcome current deadlocks of the law and to further theoretical frameworks with which to better appreciate the space of potential systems that avoid undesirable behaviour.
