The proliferation of Artificial Intelligence (AI) in decision-making contexts is hailed as a silver bullet, promising to replace human subjectivity with objective, infallible decisions. Paradoxically, considerable journalistic reporting has recently drawn attention to biased and discriminatory behaviour exhibited by AI systems on both sides of the Atlantic. Although automated decision-making now permeates critical settings, such as criminal justice, job recruitment, and border control, in which the rights and freedoms of individuals and groups are imperilled, human agents often have no way to untangle how AI systems reach such unacceptable decisions. The conspicuous bias problem of AI, alongside its operation as an inexplicable ‘black box’, makes the exploration of this phenomenon pressing, particularly in the less examined EU policy arena. This dissertation pursues an interdisciplinary research methodology to examine the main ethical and legal challenges that Narrow AI, especially in its data-driven Machine Learning (ML) form, poses in relation to bias and discrimination across the EU. Chapter 1 equips readers with pertinent background information on AI and its interdependent ML and Big Data technologies. In an accessible manner, it surveys the definitions and types of AI adopted by EU instruments, the milestones in AI’s historical progression, and its current stage of development. Chapter 2 conducts a philosophical analysis to argue against the putative ethical neutrality of AI. Ethical concerns of an epistemological nature reveal that biases enter AI systems through the selection of objectives and training data, the reliance on correlations, and the epistemic inequality between lay individuals and AI developers, compounded by that between human agents and ‘black box’ machines in general. Turning to normative ethical concerns, AI systems produce effects which, on an egalitarian account, conflict with normative ideals of fairness and equality. In more Kafkaesque scenarios, individuals and corporations may exploit the technical particularities of AI to mask discriminatory intent. Chapter 3 applies a doctrinal legal methodology to reveal the tensions that these challenging instantiations of AI create in light of soft and hard EU law instruments. Given the data-driven character of AI, biased and discriminatory AI decisions fall within the scope of the newly applicable General Data Protection Regulation (GDPR). In particular, the data processing principles of Article 5, including lawfulness, fairness, and transparency under Article 5(1)(a); the Data Protection Impact Assessments (DPIAs) of Article 35; the prohibition of automated decision-making and the speculative right to explanation of Article 22; the suggested implementation of auditing; and the enhanced enforcement authorities receive scrutiny. The dissertation concludes that a principles-based approach and the provision of anticipatory impact assessments are regulatory strengths of the GDPR. However, the EU should discourage the deployment of AI in crucial decision-making contexts and explore ways to fill the related legal gaps. Overall, Trustworthy AI is proposed as an ethical and legal paragon in the face of biased and discriminatory AI.