Abstract

Artificial intelligence is now a reality. Setting rules on the potential outcomes of intelligent machines, so that their behavior holds no surprises for humans, is becoming a priority for policy makers. In its recent Communication “Artificial Intelligence for Europe” (EU Commission 2018), for instance, the European Commission identifies the distinguishing trait of an intelligent machine as the presence of “a certain degree of autonomy” in decision making, in the light of the context. The crucial issue to be addressed, therefore, is whether it is possible to identify a set of rules for data use by intelligent machines such that their decision-making autonomy preserves humans’ traditional informational self-determination (humans provide machines only with the data they choose to), as enshrined in many existing legal frameworks (including, for personal data protection, the EU’s General Data Protection Regulation (EU Parliament and Council 2016)), and can actually prove further beneficial to individuals. Governing the autonomy of machines is a very ambitious goal for humans, since machines are geared first to the principles of formal logic and only then, possibly, to ethical or legal principles. This introduces an unprecedented degree of complexity into how a norm should be engineered, which in turn calls for in-depth reflection in order to prevent conflicts between the legal and ethical principles underlying humans’ civil coexistence and the rules of formal logic on which the functioning of machines is based (EU Parliament 2017).
