Abstract

Automated decision-making systems are commonly used by human resources departments to automate recruitment decisions. Most of these systems rely on machine learning to screen, assess, and make recommendations on candidates. Algorithmic bias and prejudice are common side-effects of these technologies that result in data-driven discrimination. However, proof of this is often unavailable due to the statistical complexities and operational opacities of machine learning, which interferes with the ability of complainants to meet the causal requirements of the EU equality directives. In direct discrimination, the use of machine learning prevents complainants from demonstrating a prima facie case. In indirect discrimination, the problems mainly manifest once the burden has shifted to the respondent, where causation operates as a quasi-defence by reference to objectively justified factors unrelated to the discrimination. This article argues that causation must be understood as an informational challenge that can be addressed in three ways: first, through the fundamental rights lens of the EU Charter of Fundamental Rights; second, through data protection measures such as the General Data Protection Regulation; and third, through the future liabilities that may arise under incoming legislation such as the Artificial Intelligence Act and the Artificial Intelligence Liability Directive proposal.
