Abstract

The Collingridge dilemma, or ‘dilemma of control’, presents a problem at the intersection of law, society and technology. New technologies can still be influenced, whether by regulation or policy, in the early stages of their development, but their impact on society remains unpredictable. In contrast, once new technologies have become embedded in society, their implications and consequences are clear, but their development can no longer be affected. This results in the great challenge of the pacing problem: technological development increasingly outpaces the creation of appropriate laws and regulations. My paper examines the problematic entanglement of Artificial Intelligence (AI) and a key aspect of the rule of law, legal certainty. AI is our modern age’s fastest-developing and most important technological advancement and a key driver of global socio-economic development, encompassing a broad spectrum of technologies ranging from simple automation to autonomous decision-making. It has the potential to improve healthcare, transportation and communication, and to contribute to climate change mitigation. However, its development carries an equal amount of risk, including opaque decision-making, gender-based and other kinds of discrimination, intrusion into private lives and misuse for criminal purposes. The transformative nature of AI technology impacts and challenges law and policymaking. The paper considers the impact of AI on the rule of law through legal certainty, and how it may undermine its various elements, among them the foreseeability, comprehensibility and clarity of norms. It does so by elaborating on the potential threat posed by AI’s opacity (the ‘black box effect’), complexity, unpredictability and partially autonomous behaviour, all of which can impede the effective verification of compliance with, and the enforcement of, new as well as already existing legal rules in international, European and national systems. My paper offers insight into a human-centric and risk-based approach towards AI, based on consideration of the legal and ethical questions surrounding the topic, to help ensure transparency and legal certainty in regulatory interventions, for the benefit of optimising the efficiency of new technologies as well as protecting the existing safeguards of legal certainty.

Highlights

  • In 2017, a female robot named Sophia was granted citizenship in Saudi Arabia, arousing great public interest worldwide

  • This paper will use the definition of artificial intelligence adopted by the European Commission’s High-Level Expert Group on Artificial Intelligence: “Artificial intelligence (AI) refers to systems designed by humans that, given a complex goal, act in the physical or digital world by perceiving their environment, interpreting the collected structured or unstructured data, reasoning on the knowledge derived from this data and deciding the best action(s) to take to achieve the given goal”

  • The concept of human rights has been based to date on the framework of human dignity: human beings have been vested with an inalienable core content of dignity, which distinguishes them from all other types of entities and provides them with a great number of rights and certain duties

Introduction

In 2017, a female robot named Sophia was granted citizenship in Saudi Arabia, arousing great public interest worldwide. This was the first occasion on which an artificial intelligence had been accorded the ordinary citizenship of a state, and it raised a number of issues. Several commentators have pointed out that the participation of electronic humanoids in social and economic life would be risky due to insufficient regulation; the legal framework needs to be updated significantly to diminish such risk factors (Stone, 2017). As part of these endeavours, European artificial intelligence experts have elaborated human-centric ethical rules for robots, which are primarily targeted at preventing potential harm caused unintentionally by robots created with insufficient technical knowledge. In the same spirit, the EU member states and Norway concluded an agreement on cooperation in the field of artificial intelligence.

AI in Law
Declaration
The legal personality of electronic humanoids
The case of Sophia
Robots as citizens
Practical and human rights concerns
Robots as economic actors
Conclusion