Are You AI’s Favourite? EU Legal Implications of Biased AI Systems in Clinical Genetics and Genomics

Abstract

The article provides a legal overview of biased AI systems in clinical genetics and genomics. The overview considers bias from two perspectives: societal and statistical. The paper explores how bias can be defined from each of these perspectives and how it can generally be classified. Based on the two perspectives, the paper examines three negative consequences of bias in AI systems: discrimination and stigmatization (the more societal concepts) and inaccuracy of AI decisions (more closely related to the statistical perception of bias). Each of these consequences is analyzed within the framework it corresponds to. Recognizing inaccuracy as a harm caused by biased AI systems is one of the most important contributions of the article: it is argued that, once identified, bias in an AI system indicates possible inaccuracy in its outcomes. The article demonstrates this through an analysis of the medical devices framework: whether it is applicable to AI applications used in genomics and genetics, how it defines bias, and what requirements it sets to prevent bias. The paper also looks at how this framework can work together with anti-discrimination and anti-stigmatization rules, especially in light of the upcoming general legal framework on AI. The authors conclude that all of these frameworks should be considered in the fight against bias in AI systems, because they reflect different approaches to the nature of bias and thus provide a broader range of mechanisms to prevent or minimize it.
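
As an illustration of the statistical perception of bias discussed in the abstract, one simple diagnostic is to compare a model's accuracy across population groups: a marked gap is exactly the kind of group-dependent inaccuracy the article treats as a harm. The sketch below is illustrative only; it is not taken from the article, and all names and data are hypothetical.

```python
# Minimal sketch: detect a statistical bias signal as an accuracy gap between groups.
import numpy as np

def groupwise_accuracy_gap(y_true, y_pred, group):
    """Return per-group accuracy and the largest gap between groups."""
    y_true, y_pred, group = (np.asarray(a) for a in (y_true, y_pred, group))
    accs = {}
    for g in np.unique(group):
        mask = group == g
        accs[str(g)] = float((y_true[mask] == y_pred[mask]).mean())
    gap = max(accs.values()) - min(accs.values())
    return accs, gap

# Hypothetical outputs of a variant-classification model for two ancestry groups
y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # ground-truth labels
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]   # model predictions
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]
per_group, gap = groupwise_accuracy_gap(y_true, y_pred, group)
print(per_group, gap)  # {'A': 0.75, 'B': 0.5} 0.25 -> group-dependent inaccuracy
```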

Similar Papers
  • Research Article
  • 10.1007/s00146-023-01661-w
Ethical AI does not have to be like finding a black cat in a dark room
  • May 4, 2023
  • AI & SOCIETY
  • Apala Lahiri Chavan + 1 more


  • Research Article
  • Cited by 12
  • 10.1016/j.infoandorg.2023.100498
From worker empowerment to managerial control: The devolution of AI tools' intended positive implementation to their negative consequences
  • Dec 14, 2023
  • Information and Organization
  • Emmanuel Monod + 4 more


  • Research Article
  • Cited by 1
  • 10.1007/s43681-023-00327-z
When things go wrong: the recall of AI systems as a last resort for ethical and lawful AI
  • Aug 30, 2023
  • AI and Ethics
  • Alessio Tartaro

This paper presents an initial exploration of the concept of AI system recall, primarily understood as a last resort when AI systems violate ethical norms, societal expectations, or legal obligations. The discussion is spurred by recent incidents involving notable AI systems, demonstrating that AI recalls can be a very real necessity. This study delves into the concept of product recall as traditionally understood in industry and explores its potential application to AI systems. Our analysis of this concept is centered around two prominent categories of recall drivers in the AI domain: ethical-social and legal considerations. In terms of ethical-social drivers, we apply the innovative notion of “moral Operational Design Domain”, suggesting AI systems should be recalled when they violate ethical principles and societal expectations. In addition, we also explore the recall of AI systems from a legal perspective, where the recently proposed AI Act provides regulatory measures for recalling AI systems that pose risks to health, safety, and fundamental rights. The paper also underscores the need for further research, especially around defining precise ethical and societal triggers for AI recalls, creating an efficient recall management framework for organizations, and reassessing the fit of traditional product recall models for AI systems within the AI Act's regulatory context. By probing these complex intersections between AI, ethics, and regulation, this work aims to contribute to the development of robust and responsible AI systems while maintaining readiness for failure scenarios.

  • Research Article
  • 10.22545/2024/00244
Epistemology in AI (Transdisciplinary AI)
  • Jan 9, 2024
  • Transdisciplinary Journal of Engineering & Science
  • Ndubuisi Idejiora-Kalu

A critical look at the evolution of AI shows a sustained but stealthy race to replace humans with AI. Early scientific literature and discourse on AI vehemently denied this "human replacement agenda", perhaps to allow AI to gain entry and acceptability in the mainstream scientific and technological arena. This thinking pattern has unknowingly shaped current scientific literature, discourse, and the general understanding of what AI is, of its development, and of its applicability (a reductionist view). It limits our understanding of both the beneficial and the destructive capabilities of AI. But when considering a TD (transdisciplinary) assessment of the developmental dynamics of AI, one can comfortably say, and must be bold enough to admit, that AI does indeed intend to replace humans and is on course to do so.
 
We see this human replacement agenda in intensified R&D efforts dedicated to developing powerful AI systems of systems which massively augment human reasoning, often far outperforming it. The inexhaustible list includes the replacement by AI of jobs formerly considered human-centric, advanced autonomous weapon systems, killer robots and AI in warfare, intelligent facial recognition, biometric monitoring, the integration of AI into biological, nuclear and space-based weapons systems, and more. If this is the direction AI is taking, then a secondary aim will surely be to integrate epistemology in AI, that is, to "grant spirits" to AI systems. This is because a distinctive characteristic of a human being is the spirit, and one cannot replace humans with AI without creating proportionate or appropriate spirits for the AI systems. Sooner or later our AI systems will have epistemological functions and possess spirits; the place of the soul for such AI systems will be attained as well. If human knowledge, beliefs, voices, clips and laws can be preserved long after their owners are gone, as smart digital technologies make possible, then spirit-based AI would indeed cause these humans to live forever. If the feat of a spirit-endowed AI is near, then why worry? Indeed, considering that humans possess good and bad spirits (from the epistemology of rational and irrational inertia), these AI systems would of course have good or bad spirits and be good or bad AI.
 
Would integrating a spirit into the rule-based or machine-learning algorithmic structure of an AI system have benefits? Yes, and profound ones. A spirit-based AI would make possible the "possession of feelings" by AI systems, a feat unattainable on the algorithmic, operational and inferential bases of today's AI systems. This inability of AI systems to have feelings remains a major setback for the acceptability (indeed the trust) and utilization of AI. If we agree that spirit in AI is possible, then overlooking efforts aimed at making it possible, or allowing AI to attain this level unhindered (admitting the dangers of human involvement in AI), could pose a threat that becomes highly destructive to mankind. This calls for critical supervision (TD-based ethical policing) to accompany the evolution, development and application of AI, underscoring the need for human mediation in AI both as a major TD research subject and as an applied function.
 
The paper discusses an approach that combines critical systems heuristics (CSH) and systems engineering (transdisciplinary systems engineering) to create "Transdisciplinary AI", which would formulate methods for integrating "human" epistemology into expert systems. Human epistemology is emphasized because, by the time this future form of AI matures, there will be terms such as "AI or machine epistemology" and "AI or machine spirits". The investigation begins by creating expert systems (knowledge-based systems) with these functions, with plans to move into robotics and other machine-learning arenas. Finally, moving the needle on what is considered permissible epistemology, or a permissible spirit, of and in AI is a critical component of the study of human mediation and AI and must be given close attention. This is discussed as well.

  • Research Article
  • Cited by 1
  • 10.1609/aies.v7i1.31713
What to Trust When We Trust Artificial Intelligence (Extended Abstract)
  • Oct 16, 2024
  • Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society
  • Duncan Purves + 2 more

So-called “trustworthy AI” has emerged as a guiding aim of industry leaders, computer and data science researchers, and policy makers in the US and Europe. Often, trustworthy AI is characterized in terms of a list of criteria. These lists usually include at least fairness, accountability, and transparency. Fairness, accountability, and transparency are valuable objectives, and they have begun to receive attention from philosophers and legal scholars. However, those who put forth criteria for trustworthy AI have failed to explain why satisfying the criteria makes an AI system—or the organizations that make use of the AI system—worthy of trust. Nor do they explain why the aim of trustworthy AI is important enough to justify devoting resources to achieve it. It even remains unclear whether an AI system is the sort of thing that can be trustworthy or not. To explain why fairness, accountability, and transparency are suitable criteria for trustworthy AI, one needs an analysis of trustworthy AI. Providing an analysis of trustworthy AI is a distinct task from providing criteria. Criteria are diagnostic; they provide a useful test for the phenomenon of interest, but they do not purport to explain the nature of the phenomenon. It is conceivable that an AI system could lack transparency, accountability, or fairness while remaining trustworthy. An analysis of trustworthy AI provides the fundamental features of an AI system in virtue of which it is (or is not) worthy of trust. An AI system that lacks these features will, necessarily, fail to be worthy of trust. This paper puts forward an analysis of trustworthy AI that can be used to critically evaluate criteria for trustworthy AI such as fairness, accountability, and transparency. In this paper we first make clear the target concept to be analyzed: trustworthy AI. We argue that AI, at least in its current form, should be understood as a distributed, complex system embedded in a larger institutional context. This characterization of AI is consistent with recent definitions proposed by national and international regulatory bodies, and it eliminates some unhappy ambiguity in the common usage of the term. We further limit the scope of our discussion to AI systems which are used to inform decision-making about qualification problems, problems wherein a decision-maker must decide whether an individual is qualified for some beneficial or harmful treatment. We argue that, given reasonable assumptions about the nature of trust and trustworthiness, only AI systems that are used to inform decision-making about qualification problems are appropriate candidates for attributions of (un)trustworthiness. We then distinguish between two models of trust and trustworthiness that we find in the existing literature. We motivate our account by highlighting this as a dilemma in the accounts of trustworthy AI that have previously been offered. These accounts claim that trustworthiness is either exclusive to full agents (and it is thus nonsense when we talk of trustworthy AI), or they offer an account of trustworthiness that collapses into mere reliability. The first sort of account we refer to as an agential account and the second sort we refer to as a reliability account. We offer that one of the core challenges of putting forth an account of trustworthy AI is to avoid reducing to one of these two camps. It is thus a desideratum of our account that it avoids being exclusive to full moral agents, while it simultaneously avoids capturing things such as mere tools. We go on to propose our positive account, which we submit avoids these twin pitfalls. We subsequently argue that if AI can be trustworthy, then it will be trustworthy on an institutional model. Starting from an account of institutional trust offered by Purves and Davis, we argue that trustworthy AI systems have three features: they are competent with regard to the task they are assigned, they are responsive to the morally salient facts governing the decision-making context in which they are deployed, and they publicly provide evidence of these features. As noted, this account builds on a model of institutional trust offered by Purves and Davis and an account of default trust from Margaret Urban Walker. The resulting account allows us to address the core challenge of finding a balance between agential accounts and reliability accounts. We go on to refine our account, answer objections, and revisit the list of criteria from above, explaining them in terms of competence, responsiveness, and evidence.

  • Conference Article
  • Cited by 4
  • 10.1145/3462757.3466099
AI systems and product liability
  • Jun 21, 2021
  • Georg Borges

The article examines whether current product liability law provides an appropriate regulation for AI systems. This question, discussed using the example of the European Product Liability Directive, is of great practical importance in the current legal policy debate on liability for AI systems. The article demonstrates that, in principle, the liability requirements are also applicable to AI systems. If the conduct of an AI system is carefully distinguished from its properties, excessive liability can be avoided. Reversing the burden of proof in favour of the injured party in cases of faulty behaviour enables a liability regime that is fair to the interests at stake. However, product liability law applies only if AI systems lead directly to personal injury or damage to property. It is not applicable insofar as AI systems indirectly lead to considerable disadvantages for the person concerned, in particular through assessments of persons. Protection against discrimination or otherwise unfair assessments by AI systems must therefore be effected by other legal instruments.

  • Research Article
  • Cited by 17
  • 10.1109/mic.2021.3101919
Knowledge-Intensive Language Understanding for Explainable AI
  • Sep 1, 2021
  • IEEE Internet Computing
  • Amit Sheth + 3 more

AI systems have seen significant adoption in various domains. At the same time, further adoption in some domains is hindered by the inability to fully trust an AI system not to harm a human. Moreover, fairness, privacy, transparency, and explainability are vital to developing trust in AI systems. As stated in “Describing Trustworthy AI” (https://www.ibm.com/watson/trustworthy-ai), “Trust comes through understanding. How AI-led decisions are made and what determining factors were included are crucial to understand.” The subarea of explaining AI systems has come to be known as XAI. Multiple aspects of an AI system can be explained; these include biases that the data might have, a lack of data points in a particular region of the example space, the fairness of data gathering, feature importances, etc. Beyond these, however, it is critical to have human-centered explanations directly related to decision-making, similar to how a domain expert makes decisions based on “domain knowledge,” including well-established, peer-validated explicit guidelines. To understand and validate an AI system's outcomes (such as classifications, recommendations, and predictions) in a way that leads to trust in the system, it is necessary to involve explicit domain knowledge that humans understand and use. Contemporary XAI methods have yet to address explanations that enable decision-making similar to an expert's. Figure 1 shows the stages of adoption of an AI system into the real world.
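
The abstract lists feature importances among the aspects of an AI system that can be explained. For reference, below is a minimal sketch of permutation importance, a standard model-agnostic technique; it is not the knowledge-intensive, domain-knowledge-driven approach this paper argues for, and the `model` object, data, and metric are assumed placeholders.

```python
# Sketch of permutation feature importance: how much the score drops when one
# feature column is shuffled (a conventional XAI signal, illustrative only).
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """model: any object with .predict(X); metric: callable metric(y_true, y_pred)."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # break the link between feature j and the labels
            drops.append(baseline - metric(y, model.predict(X_perm)))
        importances[j] = np.mean(drops)  # larger drop = more important feature
    return importances
```

Any scikit-learn-style estimator and metric (for example, accuracy) can be plugged in; the point is only to show the kind of explanation the abstract calls insufficient on its own.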

  • Supplementary Content
  • Cited by 16
  • 10.3389/fpsyg.2022.836650
AI and Ethics When Human Beings Collaborate With AI Agents
  • Mar 4, 2022
  • Frontiers in Psychology
  • José J Cañas

The relationship between a human being and an AI system has to be considered as a collaborative process between two agents during the performance of an activity. When there is a collaboration between two people, a fundamental characteristic of that collaboration is that there is co-supervision, with each agent supervising the actions of the other. Such supervision ensures that the activity achieves its objectives, but it also means that responsibility for the consequences of the activity is shared. If there is no co-supervision, neither collaborator can be held co-responsible for the actions of the other. When the collaboration is between a person and an AI system, co-supervision is also necessary to ensure that the objectives of the activity are achieved, but this also means that there is co-responsibility for the consequences of the activities. Therefore, if each agent's responsibility for the consequences of the activity depends on the effectiveness and efficiency of the supervision that that agent performs over the other agent's actions, it will be necessary to take into account the way in which that supervision is carried out and the factors on which it depends. In the case of the human supervision of the actions of an AI system, there is a wealth of psychological research that can help us to establish cognitive and non-cognitive boundaries and their relationship to the responsibility of humans collaborating with AI systems. There is also psychological research on how an external observer supervises and evaluates human actions. This research can be used to programme AI systems in such a way that the boundaries of responsibility for AI systems can be established. In this article, we will describe some examples of how such research on the task of supervising the actions of another agent can be used to establish lines of shared responsibility between a human being and an AI system. The article will conclude by proposing that we should develop a methodology for assessing responsibility based on the results of the collaboration between a human being and an AI agent during the performance of one common activity.

  • Research Article
  • Cited by 14
  • 10.3390/app12073341
AI and Clinical Decision Making: The Limitations and Risks of Computational Reductionism in Bowel Cancer Screening
  • Mar 25, 2022
  • Applied Sciences
  • Saleem Ameen + 3 more

Advances in artificial intelligence in healthcare are frequently promoted as ‘solutions’ to improve the accuracy, safety, and quality of clinical decisions, treatments, and care. Despite some diagnostic success, however, AI systems rely on forms of reductive reasoning and computational determinism that embed problematic assumptions about clinical decision-making and clinical practice. Clinician autonomy, experience, and judgement are reduced to inputs and outputs framed as binary or multi-class classification problems benchmarked against a clinician’s capacity to identify or predict disease states. This paper examines this reductive reasoning in AI systems for colorectal cancer (CRC) to highlight their limitations and risks: (1) in AI systems themselves due to inherent biases in (a) retrospective training datasets and (b) embedded assumptions in underlying AI architectures and algorithms; (2) in the problematic and limited evaluations being conducted on AI systems prior to system integration in clinical practice; and (3) in marginalising socio-technical factors in the context-dependent interactions between clinicians, their patients, and the broader health system. The paper argues that to optimise benefits from AI systems and to avoid negative unintended consequences for clinical decision-making and patient care, there is a need for more nuanced and balanced approaches to AI system deployment and evaluation in CRC.

  • Research Article
  • Cited by 3
  • 10.56315/pscf9-22metz
Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World
  • Sep 1, 2022
  • Perspectives on Science and Christian Faith
  • Cade Metz


  • Research Article
  • Cited by 51
  • 10.1007/s00146-021-01145-9
The AI doctor will see you now: assessing the framing of AI in news coverage
  • Mar 8, 2021
  • AI & SOCIETY
  • Mercedes Bunz + 1 more

One of the sectors for which Artificial Intelligence applications have been considered as exceptionally promising is the healthcare sector. As a public-facing sector, the introduction of AI applications has been subject to extended news coverage. This article conducts a quantitative and qualitative data analysis of English news media articles covering AI systems that allow the automation of tasks that so far needed to be done by a medical expert such as a doctor or a nurse thereby redistributing their agency. We investigated in this article one particular framing of AI systems and their agency: the framing that positions AI systems as (1a) replacing and (1b) outperforming the human medical expert, and in which (2) AI systems are personified and/or addressed as a person. The analysis of our data set consisting of 365 articles written between the years 1980 and 2019 will show that there is a tendency to present AI systems as outperforming human expertise. These findings are important given the central role of news coverage in explaining AI and given the fact that the popular frame of ‘outperforming’ might place AI systems above critique and concern including the Hippocratic oath. Our data also showed that the addressing of an AI system as a person is a trend that has been advanced only recently and is a new development in the public discourse about AI.

  • Research Article
  • Cited by 17
  • 10.1145/3610067
How Stated Accuracy of an AI System and Analogies to Explain Accuracy Affect Human Reliance on the System
  • Sep 28, 2023
  • Proceedings of the ACM on Human-Computer Interaction
  • Gaole He + 2 more

AI systems are increasingly being used to support human decision making. It is important that AI advice is followed appropriately. However, according to existing literature, users typically under-rely or over-rely on AI systems, and this leads to sub-optimal team performance. In this context, we investigate the role of stated system accuracy by contrasting the lack of system information with the presence of system accuracy in a loan prediction task. We explore how the degree to which humans understand system accuracy influences their reliance on the AI system, by investigating numeracy levels and with the aid of analogies to explain system accuracy in a first-of-its-kind between-subjects study (N=281). We found that explaining the stated accuracy of a system using analogies failed to help users rely on the AI system appropriately (i.e., the tendency of users to rely on the system when the system is correct, or on themselves otherwise). To eliminate the impact of subjective attitudes towards analogy domains, we conducted a within-subjects study (N=248) where each participant worked on tasks with analogy-based explanations from different domains. Results from this second study confirmed that explaining stated accuracy of the system with analogies was not sufficient to facilitate appropriate reliance on the AI system in the context of loan prediction tasks, irrespective of individual user differences. Based on our findings from the two studies, we reason that the under-reliance on the AI system may be a result of users' overestimation of their own ability to solve the given task. Thus, although familiar analogies can be effective in improving the intelligibility of stated accuracy of the system, an improved understanding of system accuracy does not necessarily lead to improved system reliance and team performance.
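
For readers unfamiliar with the construct, "appropriate reliance" can be operationalized as the share of tasks on which the final human decision follows the AI when the AI is correct and departs from it when the AI is wrong. The sketch below is one such illustrative operationalization, not necessarily the exact metric used in this study, and the example session is hypothetical.

```python
# One possible measure of appropriate reliance in AI-assisted decision making
# (illustrative, not the paper's exact metric).
def appropriate_reliance(ai_correct, followed_ai):
    """ai_correct, followed_ai: equal-length lists of booleans, one entry per task."""
    appropriate = [
        (follow if correct else not follow)  # follow correct advice, override wrong advice
        for correct, follow in zip(ai_correct, followed_ai)
    ]
    return sum(appropriate) / len(appropriate)

# Hypothetical session: the AI is right on 3 of 4 tasks; the user follows it on tasks 1-3
print(appropriate_reliance([True, True, False, True], [True, True, True, False]))
# -> 0.5: the user followed correct advice twice, but also followed wrong advice
#    once and overrode correct advice once
```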

  • Research Article
  • 10.51583/ijltemas.2025.1409000068
Designing AI Systems that Support Fairness Across Distributive, Procedural, and Interactional Justice Dimensions
  • Oct 8, 2025
  • International Journal of Latest Technology in Engineering Management & Applied Science
  • Mr. Vaivaw Kumar Singh + 1 more

Abstract: The need for fair AI systems has been emphasized ever more strongly as AI's influence keeps growing and critical decisions are made in sectors such as healthcare, finance, and human resources. AI fairness is not only about the fair distribution of results; it also involves fair processes by which decisions are made and the character of the interactions between the AI system and its users. This article uses the concepts of organizational justice as a frame to explain the ways in which the design of an AI system can become a vehicle for distributive justice (fair distribution of resources and results), procedural justice (decision-making processes that are open and impartial), and interactional justice (communication that is respectful and empathetic). Addressing all three dimensions makes it possible for an AI system to be more in line with human values and hence to receive more trust, legitimacy, and acceptance from stakeholders (Colquitt et al., 2013; Binns, 2018). The paper also refers to various approaches, including bias mitigation techniques, algorithmic transparency, and user-centric interfaces, that bring fairness into such systems. Finally, the authors discuss ongoing challenges (for instance, data bias and ethical trade-offs) and recommend future research directions for building more just AI systems (Miller, 2017; Selbst et al., 2019).
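
Among the bias mitigation techniques the paper refers to, pre-processing methods are one common family. The sketch below shows reweighing in the style of Kamiran and Calders, where training samples are weighted so that the protected attribute and the label become statistically independent in the weighted data; it is offered as a generic illustration rather than a method prescribed by the paper, and the groups and labels are hypothetical.

```python
# Sketch of reweighing-style bias mitigation: weight each sample by
# (expected joint frequency under independence) / (observed joint frequency).
from collections import Counter

def reweighing_weights(groups, labels):
    n = len(labels)
    p_group = Counter(groups)            # counts per protected group
    p_label = Counter(labels)            # counts per outcome label
    p_joint = Counter(zip(groups, labels))  # counts per (group, label) pair
    return [
        (p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

groups = ["A", "A", "A", "B", "B", "B"]
labels = [1, 1, 0, 0, 0, 1]
print(reweighing_weights(groups, labels))
# -> [0.75, 0.75, 1.5, 0.75, 0.75, 1.5]; use as sample weights when fitting a model
```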

  • Research Article
  • Cited by 6
  • 10.1016/j.irle.2023.106153
Market for artificial intelligence in health care and compensation for medical errors
  • Jun 27, 2023
  • International Review of Law and Economics
  • Bertrand Chopard + 1 more


  • Conference Article
  • Cited by 44
  • 10.1145/3544548.3581025
Knowing About Knowing: An Illusion of Human Competence Can Hinder Appropriate Reliance on AI Systems
  • Apr 19, 2023
  • Gaole He + 2 more

The dazzling promises of AI systems to augment humans in various tasks hinge on whether humans can appropriately rely on them. Recent research has shown that appropriate reliance is the key to achieving complementary team performance in AI-assisted decision making. This paper addresses an under-explored problem of whether the Dunning-Kruger Effect (DKE) among people can hinder their appropriate reliance on AI systems. DKE is a metacognitive bias due to which less-competent individuals overestimate their own skill and performance. Through an empirical study (N = 249), we explored the impact of DKE on human reliance on an AI system, and whether such effects can be mitigated using a tutorial intervention that reveals the fallibility of AI advice, and by exploiting logic units-based explanations to improve user understanding of AI advice. We found that participants who overestimate their performance tend to exhibit under-reliance on AI systems, which hinders optimal team performance. Logic units-based explanations did not help users in either improving the calibration of their competence or facilitating appropriate reliance. While the tutorial intervention was highly effective in helping users calibrate their self-assessment and facilitating appropriate reliance among participants with overestimated self-assessment, we found that it can potentially hurt the appropriate reliance of participants with underestimated self-assessment. Our work has broad implications for the design of methods to tackle user cognitive biases while facilitating appropriate reliance on AI systems. Our findings advance the current understanding of the role of self-assessment in shaping trust and reliance in human-AI decision making. This lays out promising future directions for relevant HCI research in this community.

More from: European Pharmaceutical Law Review
  • Research Article
  • 10.21552/eplr/2021/4/6
Portugal ∙ Resilience of Critical Infrastructures: The Portuguese Government Protection of Pharmaceutical and Medical Devices Industries Falls Short
  • Jan 1, 2021
  • European Pharmaceutical Law Review
  • M Ricardo

  • Open Access
  • Research Article
  • 10.21552/eplr/2021/1/3
Editorial
  • Jan 1, 2021
  • European Pharmaceutical Law Review
  • S Röttger-Wirtz

  • Research Article
  • 10.21552/eplr/2020/4/8
The Italian Criminal Court of Rome: The Latest Chapter of the Avastin-Lucentis Saga
  • Jan 1, 2021
  • European Pharmaceutical Law Review
  • G Ragucci

  • Research Article
  • 10.21552/eplr/2021/2/9
New EU Pharmaceutical Law and Policy
  • Jan 1, 2021
  • European Pharmaceutical Law Review

  • Open Access
  • Research Article
  • 10.21552/eplr/2021/4/3
Editorial
  • Jan 1, 2021
  • European Pharmaceutical Law Review
  • S Röttger-Wirtz

  • Research Article
  • 10.21552/eplr/2021/4/9
New EU Pharmaceutical Law and Policy
  • Jan 1, 2021
  • European Pharmaceutical Law Review

  • Research Article
  • Cited by 1
  • 10.21552/eplr/2021/4/4
Are You AI’s Favourite? EU Legal Implications of Biased AI Systems in Clinical Genetics and Genomics
  • Jan 1, 2021
  • European Pharmaceutical Law Review
  • A Kiseleva + 1 more

  • Open Access
  • Research Article
  • 10.21552/eplr/2020/4/3
Editorial
  • Jan 1, 2021
  • European Pharmaceutical Law Review

  • Research Article
  • 10.21552/eplr/2021/3/10
New EU Pharmaceutical Law and Policy
  • Jan 1, 2021
  • European Pharmaceutical Law Review

  • Research Article
  • 10.21552/eplr/2021/2/8
Jurisdiction
  • Jan 1, 2021
  • European Pharmaceutical Law Review
