Abstract
This study examines gender discrimination in Artificial Intelligence (AI) systems used in the legal system, focusing on risk assessment, facial recognition, and decision-making and decision-support tools. It investigates how AI's reliance on historical data, the under- and over-representation of groups in training data, and the homogeneity of development teams perpetuate existing gender biases. The study then analyses the implications of the United Kingdom General Data Protection Regulation (UK GDPR) and the proposed Data Protection and Digital Information (DPDI) Bill for addressing gender bias in AI. It finds that a more robust and proactive legal framework is needed, one that addresses the root causes of these biases in the design and implementation of AI systems. The paper concludes by proposing a framework to effectively address gender bias in AI systems used in the legal system. The framework sets out explicit obligations for policymakers, companies, and end users to ensure the development and deployment of bias-free AI systems, and provides comprehensive guidelines and oversight mechanisms that promote proactive measures to prevent gender bias. Its aim is to create a more equitable legal environment for everyone.