Racing Against the Algorithm: Leveraging Inclusive AI as an Antiracist Tool for Brain Health.
Artificial intelligence (AI) is transforming medicine, including neurology and mental health. Yet without equity-centered design, AI risks reinforcing systemic racism. This article explores how algorithmic bias and phenotypic exclusion disproportionately affect marginalized communities in brain health. Drawing on lived experience and scientific evidence, the essay outlines five design principles, centered on inclusion, transparency, and accountability, to ensure AI promotes equity. By reimagining AI as a tool for justice, we can reshape translational science to serve all populations.
- Research Article
- 10.1148/rg.230067
- May 1, 2024
- Radiographics: A Review Publication of the Radiological Society of North America, Inc
Artificial intelligence (AI) algorithms are prone to bias at multiple stages of model development, with potential for exacerbating health disparities. However, bias in imaging AI is a complex topic that encompasses multiple coexisting definitions. Bias may refer to unequal preference for a person or group owing to preexisting attitudes or beliefs, either intentional or unintentional. In contrast, cognitive bias refers to systematic deviation from objective judgment due to reliance on heuristics, and statistical bias refers to differences between true and expected values, commonly manifesting as systematic error in model prediction (i.e., a model whose output is unrepresentative of real-world conditions). Clinical decisions informed by biased models may lead to patient harm due to action on inaccurate AI results or exacerbate health inequities due to differing performance among patient populations. While inequitable bias can harm patients in this context, a mindful approach leveraging equitable bias can address underrepresentation of minority groups or rare diseases. Radiologists should also be aware of bias after AI deployment, such as automation bias, a tendency to agree with automated decisions despite contrary evidence. Understanding common sources of imaging AI bias and the consequences of using biased models can guide preventive measures to mitigate its impact. Accordingly, the authors focus on sources of bias at stages along the imaging machine learning life cycle, attempting to simplify potentially intimidating technical terminology for general radiologists using AI tools in practice or collaborating with data scientists and engineers for AI tool development. The authors review definitions of bias in AI, describe common sources of bias, and present recommendations to guide quality control measures to mitigate the impact of bias in imaging AI. Understanding the terms featured in this article will enable a proactive approach to identifying and mitigating bias in imaging AI.
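To make the distinction concrete, the short sketch below (not taken from the article; the data, group labels, and the 0.5 operating threshold are hypothetical assumptions) computes statistical bias as the mean difference between model scores and ground truth, then checks whether error rates differ between two patient subgroups, the kind of performance disparity the authors warn can exacerbate health inequities.

```python
# Illustrative sketch only: synthetic scores for two hypothetical patient subgroups,
# with noisier predictions for group 1 to mimic underrepresentation in training data.
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)          # true disease labels (0/1)
group = rng.integers(0, 2, size=1000)           # 0 = subgroup A, 1 = subgroup B
noise_scale = np.where(group == 1, 0.45, 0.25)  # model is noisier for subgroup B
y_score = np.clip(y_true + rng.normal(0.0, 1.0, 1000) * noise_scale, 0, 1)

# Statistical bias: systematic difference between predicted and true values.
overall_bias = np.mean(y_score - y_true)

# Subgroup error rates at a fixed operating point reveal performance disparities.
y_pred = (y_score >= 0.5).astype(int)
for g in (0, 1):
    mask = group == g
    fnr = np.mean(y_pred[mask & (y_true == 1)] == 0)  # missed cases
    fpr = np.mean(y_pred[mask & (y_true == 0)] == 1)  # false alarms
    subgroup_bias = np.mean(y_score[mask] - y_true[mask])
    print(f"group {g}: mean(score - truth) = {subgroup_bias:+.3f}, FNR = {fnr:.3f}, FPR = {fpr:.3f}")

print(f"overall statistical bias (mean prediction error): {overall_bias:+.3f}")
```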
- Research Article
- 10.4018/ijiit.309582
- Sep 23, 2022
- International Journal of Intelligent Information Technologies
Humans are social beings. Emotions, like their thoughts, play an essential role in decision-making. Today, artificial intelligence (AI) raises expectations for faster, more accurate, more rational, and fairer decisions with technological advancements. As a result, AI systems have often been seen as an ideal decision-making mechanism. But what if these systems decide against you based on gender, race, or other characteristics? Biased or unbiased AI, that's the question! The motivation of this study is to raise awareness among researchers about bias in AI and contribute to the advancement of AI studies and systems. As the primary purpose of this study is to examine bias in the decision-making process of AI systems, this paper focused on (1) bias in humans and AI, (2) the factors that lead to bias in AI systems, (3) current examples of bias in AI systems, and (4) various methods and recommendations to mitigate bias in AI systems.
- Research Article
- 10.1007/s11883-024-01190-x
- Feb 16, 2024
- Current Atherosclerosis Reports
Bias in artificial intelligence (AI) models can result in unintended consequences. In cardiovascular imaging, biased AI models used in clinical practice can negatively affect patient outcomes. Biased AI models result from decisions made when training and evaluating a model. This paper is a comprehensive guide for AI development teams to understand assumptions in datasets and chosen metrics for outcome/ground truth, and how this translates to real-world performance for cardiovascular disease (CVD). CVDs are the number one cause of mortality worldwide; however, the prevalence, burden, and outcomes of CVD vary across gender and race. Several biomarkers have also been shown to vary among different populations and ethnic/racial groups. Inequalities in clinical trial inclusion, clinical presentation, diagnosis, and treatment are preserved in health data that is ultimately used to train AI algorithms, leading to potential biases in model performance. Although AI models themselves can be biased, AI can also help to mitigate bias (e.g., through bias auditing tools). In this review paper, we describe in detail implicit and explicit biases in the care of cardiovascular disease that may be present in existing datasets but are not obvious to model developers. We review disparities in CVD outcomes across different genders and race groups, differences in treatment of historically marginalized groups, and disparities in clinical trials for various cardiovascular diseases and outcomes. Thereafter, we summarize CVD AI literature that demonstrates bias in CVD AI, as well as approaches in which AI is being used to mitigate that bias.
- Research Article
- 10.31435/ijitss.3(47).2025.3529
- Aug 12, 2025
- International Journal of Innovative Technologies in Social Science
Introduction and Objective: The increasing global burden of mental health disorders, exacerbated by the COVID-19 pandemic and the limitations of traditional mental health systems, has accelerated interest in digital health solutions. Artificial intelligence (AI) has emerged as a transformative force in mental health care, offering tools for diagnosis, intervention, and patient monitoring. This review aims to explore current applications, opportunities, and ethical challenges of AI-based tools in mental health, with an emphasis on responsible and equitable deployment. Review Methods: A narrative literature review was conducted using PubMed, Scopus, Web of Science, and Google Scholar. Peer-reviewed articles published between 2014 and 2022 were considered, with a focus on interdisciplinary sources covering clinical psychology, digital health technologies, AI development, and medical ethics. Key themes were synthesized across domains to provide a holistic understanding. State of Knowledge: AI technologies, including chatbots, machine learning algorithms, and predictive analytics, are increasingly integrated into mental health services. They offer scalable solutions for screening, personalized intervention, and early risk detection. However, concerns remain about algorithmic bias, privacy, transparency, and the digital divide. The current body of evidence supports AI’s potential to complement—rather than replace—human care, particularly when integrated responsibly within clinical frameworks. Conclusion: AI holds significant promise in improving access, personalization, and efficiency in mental health care. To harness its benefits, interdisciplinary collaboration, robust ethical oversight, and patient-centered design are essential. Further research is needed to evaluate long-term outcomes and ensure AI systems uphold clinical integrity, equity, and trust.
- Book Chapter
- 10.1007/978-3-030-74188-4_16
- Jan 1, 2021
This chapter addresses the current and future challenges of implementing artificial intelligence (AI) in brain and mental health by exploring international regulations of healthcare and AI, and how human rights play a role in these regulations. First, a broad perspective of human rights in AI and human rights in healthcare is reviewed, then regulations of AI in healthcare are discussed, and finally applications of human rights in AI and brain and mental health regulations are considered. The foremost challenge in the blending and development of regulations of AI in healthcare is that currently both AI and healthcare lack accepted international-level regulation. It can be argued that human rights and human rights law are for the most part internationally accepted, and we can use these rights as guidelines for global regulations. However, as philosophical and ethical environments vary across nations, subsequent policies reflect varying conceptions and fulfillments of human rights. Like human rights, the recognized definitions of “AI” and “health” can vary across international borders and even vary within the professions themselves. One of the biggest challenges in the future of AI in brain and mental health will be applying human rights in a practical manner. Initially, the thought of applying human rights in the development of AI in healthcare seems straightforward. In order to develop better AI, better healthcare and, thus, better AI in healthcare, one must simply respect the human rights that are granted by various declarations, covenants, and constitutions. This is so seemingly straightforward that one would think this has already been the case in these developing fields. However, as we explore this notion of applying human rights, we find agreement, disagreement, and variability on a global scale. It is these variabilities that may well hamper the ethical development of AI in brain and mental health internationally.
- Research Article
- 10.2196/41089
- Jun 22, 2023
- Journal of Medical Internet Research
Resources are increasingly spent on artificial intelligence (AI) solutions for medical applications aiming to improve diagnosis, treatment, and prevention of diseases. While the need for transparency and reduction of bias in data and algorithm development has been addressed in past studies, little is known about the knowledge and perception of bias among AI developers. This study's objective was to survey AI specialists in health care to investigate developers' perceptions of bias in AI algorithms for health care applications and their awareness and use of preventative measures. A web-based survey was provided in both German and English, comprising a maximum of 41 questions using branching logic within the REDCap web application. Only the results of participants with experience in the field of medical AI applications and complete questionnaires were included for analysis. Demographic data, technical expertise, and perceptions of fairness, as well as knowledge of biases in AI, were analyzed, and variations among gender, age, and work environment were assessed. A total of 151 AI specialists completed the web-based survey. The median age was 30 (IQR 26-39) years, and 67% (101/151) of respondents were male. About one-third rated their AI development projects as fair (47/151, 31%) and another third as moderately fair (51/151, 34%); 12% (18/151) reported their AI to be barely fair, and 1% (2/151) not fair at all. One participant identifying as diverse rated AI developments as barely fair, and the 2 participants of undefined gender rated AI developments as barely fair and moderately fair, respectively. Reasons for biases selected by respondents were lack of fair data (90/132, 68%), guidelines or recommendations (65/132, 49%), or knowledge (60/132, 45%). Half of the respondents worked with image data (83/151, 55%) from 1 center only (76/151, 50%), and 35% (53/151) worked with national data exclusively. Overall, this study shows that developers perceive the fairness of their AI projects as only moderate. Gender minorities did not once rate their AI development as fair or very fair. Therefore, further studies need to focus on minorities and women and their perceptions of AI. The results highlight the need to strengthen knowledge about bias in AI and provide guidelines on preventing biases in AI health care applications.
- Research Article
- 10.1037/amp0001215
- Jan 1, 2024
- American Psychologist
Research is underway exploring the use of closed-circuit television (CCTV) cameras and artificial intelligence (AI) for suicide prevention research in public locations where suicides occur. Given the sensitive nature and potential implications of this research, this study explored ethical concerns the public may have about research of this nature. Developed based on the principle of respect, a survey was administered to a representative sample of 1,096 Australians to understand perspectives on the research. The sample was aged 18 and older, 53% female, and 9% ethnic minority. Following an explanatory mixed methods approach, interviews and a focus group were conducted with people with a lived experience of suicide and first responders to contextualize the findings. There were broad levels of acceptance among the Australian public. Younger respondents, females, and those declining to state their ethnicity had lower levels of acceptance of CCTV research using AI for suicide prevention. Those with lived experience of suicide had higher acceptance. Qualitative data indicated concern regarding racial bias in AI and police response to suicidal crises and the need for lived experience involvement in the development and implementation of any resulting interventions. Broad public acceptance of the research aligns with the principle of respect for persons. Beneficence emerged in the context of findings emphasizing the importance of meaningfully including people with lived experience in the development and implementation of interventions resulting from this research, while justice emerged in themes expressing concerns about racial bias in AI and police response to mental health crises.
- Research Article
- 10.5206/fpq/2022.3/4.14191
- Dec 21, 2022
- Feminist Philosophy Quarterly
Increasing concerns have been raised regarding artificial intelligence (AI) bias, and in response, efforts have been made to pursue AI fairness. In this paper, we argue that the idea of structural injustice serves as a helpful framework for clarifying the ethical concerns surrounding AI bias—including the nature of its moral problem and the responsibility for addressing it—and reconceptualizing the approach to pursuing AI fairness. Using AI in health care as a case study, we argue that AI bias is a form of structural injustice that exists when AI systems interact with other social factors to exacerbate existing social inequalities, making some groups of people more vulnerable to undeserved burdens while conferring unearned benefits to others. The goal of AI fairness, understood this way, is to pursue a more just social structure with the development and use of AI systems when appropriate. We further argue that all participating agents in the unjust social structure associated with AI bias bear a shared responsibility to join collective action with the goal of reforming the social structure, and we provide a list of practical recommendations for agents in various social positions to contribute to this collective action.
- Conference Article
- 10.1109/icict50816.2021.9358507
- Jan 20, 2021
As the world embraces Industry 4.0 with open arms, Artificial Intelligence has taken centre stage. AI systems are driving decision making and impacting stakeholders' viewpoints through data. While these systems pamper companies with new-found efficiencies, they are quite vulnerable to the 'garbage in, garbage out' syndrome. In the case of such intelligent systems, the 'garbage' is biased data. One cannot hope to eliminate bias in machine learning and Artificial Intelligence without addressing the pressing concerns of bias in humans. Although it is deemed an uphill task by intellectuals in academia and industry, gradual yet significant steps have been made. This paper intends to measure and mitigate bias in US Employment Demographics. Different algorithms are applied and a comparison is carried out. The social implications of bias in Artificial Intelligence are also discussed extensively.
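As a hedged illustration of what "measuring bias" in employment-style tabular data can look like in practice, the sketch below computes the disparate impact ratio (selection rate of one group divided by that of another). The column names, toy data, and the 0.8 "four-fifths rule" threshold are assumptions made here for illustration, not details taken from the cited paper.

```python
# Minimal disparate impact check on hypothetical employment outcomes.
import pandas as pd

df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "selected": [ 1,   1,   0,   1,   0,   0,   0,   1 ],
})

rates = df.groupby("group")["selected"].mean()
di_ratio = rates["B"] / rates["A"]   # unprivileged group rate / privileged group rate

print(f"selection rates:\n{rates}")
print(f"disparate impact ratio: {di_ratio:.2f}")
print("flags potential adverse impact" if di_ratio < 0.8 else "within the four-fifths rule")
```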
- Abstract
- 10.1017/cts.2024.824
- Apr 1, 2025
- Journal of Clinical and Translational Science
Objectives/Goals: We designed a forum to educate participants about bioethical issues in the application of big data (BD) and artificial intelligence (AI) in clinical and translational research (CTR) in underrepresented populations. We sought to determine changes in participants’ interests in ethics, bias, and trustworthiness of AI and BD. Methods/Study Population: 141 individuals registered for the forum, which was advertised to our partner institutions, minority-serving institutions, and community organizations. Registrants received email instructions to complete an AI Trustworthiness (AI-Trust) survey, a questionnaire with integrated qualitative and quantitative measures designed to better understand learners who engaged with the institution-specific AI/Data Science curriculum. Respondents completed the survey using personal devices via a link and QR code, with anonymized responses and enhanced privacy features. 82 people attended; 22 responded to the survey pre-forum and 22 post-forum. Pre- and post-forum responses were qualitatively compared to assess shifts in attitudes toward AI and BD and related interests in ethics, bias, and trustworthiness. Results/Anticipated Results: We found increased interests post- vs. pre-forum in the use of AI for CTR, AI bias and its effects on underrepresented populations, and ethical risk assessment and mitigation strategies for the use of BD to empower research participants. In contrast, trust in AI was lower post- vs. pre-forum. Moreover, respondents also indicated that the current application of AI in healthcare practice would result in increased racial, economic, and gender bias. In comparison, interest in ethical challenges, bioethical considerations, and trustworthiness regarding use of BD and AI in health research and practice did not differ pre- vs. post-forum. Discussion/Significance of Impact: Interest in the application of BD/AI in CTR increased post-forum, but AI distrust and bias expectations also increased, suggesting that learners become more skeptical and discerning as they become more knowledgeable about the complexity of the ethics of AI and BD use in healthcare, especially its application to underrepresented populations.
- Front Matter
- 10.1093/9780198972877.003.0045
- Jun 2, 2025
This chapter examines how artificial intelligence (AI) bias can undermine legal (or legally relevant) norms and standards. It does so by introducing a conceptual distinction between bias in AI (arising from flawed data, programming choices, or emergent algorithmic behaviour) and bias towards AI (where human decision-makers either overtrust or unjustifiably dismiss AI outputs). This distinction can equip legal practitioners with a deeper, yet straightforward understanding of various AI biases and the risks they raise. To mitigate these risks, the chapter explores preventive and corrective strategies, including regulatory sandboxes, fairness-aware AI design, auditing laws, and legal oversight mechanisms. Addressing AI bias is not merely a technical challenge—it is a professional responsibility for legal practitioners who seek to properly navigate the relationship between law and AI.
- Preprint Article
- 10.31219/osf.io/akd7q
- Dec 5, 2024
Artificial intelligence (AI) holds immense potential to revolutionize mental health care by providing scalable, personalized, and accessible solutions. However, systemic biases in AI models pose a significant risk of exacerbating disparities, particularly for minoritized populations, underscoring the critical need for robust frameworks that prioritize equity throughout development and implementation. This Perspective introduces the Bias Reduction and Inclusion through Dynamic Generative Equity (BRIDGE) Model, an innovative framework designed to address these complexities. The BRIDGE Model integrates fair-aware machine learning techniques with co-creation methods, combining quantitative approaches to detect bias in algorithms with qualitative input from stakeholders to ensure cultural relevance and practical application. By leveraging dual-level, iterative feedback loops, the BRIDGE Model establishes a systematic and dynamic process for developing equitable AI systems that align technical rigor with real-world contexts. The TIES Parenting Program, an AI-powered digital intervention for child mental health, is presented as a case study to illustrate how this framework is being applied to address real-world challenges. By bridging technical precision with lived experiences, the BRIDGE Model aspires to foster the creation of equitable, adaptive, and culturally responsive AI systems that advance accessibility, trust, and fairness in mental health care.
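The abstract does not spell out the quantitative checks inside the BRIDGE feedback loop, so the following is only a hedged sketch of one common fairness-aware check (demographic parity difference) acting as a gate in an iterative development cycle; the metric choice, tolerance, and data are assumptions made here for illustration, not published details of the BRIDGE Model.

```python
# Hypothetical bias gate that could sit inside an iterative model-development loop.
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-prediction rates between two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def bias_gate(y_pred, group, tolerance=0.05):
    """Return True if the fairness check passes; otherwise flag the model for revision."""
    gap = demographic_parity_difference(y_pred, group)
    print(f"demographic parity difference = {gap:.3f} (tolerance {tolerance})")
    return gap <= tolerance

# Hypothetical predictions from one development iteration.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, 500)
y_pred = (rng.random(500) < np.where(group == 0, 0.40, 0.25)).astype(int)

if not bias_gate(y_pred, group):
    print("gap exceeds tolerance: revisit data, reweight, or retrain before deployment")
```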
- Research Article
- 10.1108/cfw-07-2022-0019
- Sep 12, 2023
- The Case For Women
Learning outcomes: This case is designed to enable students to understand the role of women in artificial intelligence (AI); understand the importance of ethics and diversity in the AI field; discuss the ethical issues of AI; study the implications of unethical AI; examine the dark side of corporate-backed AI research and the difficult relationship between corporate interests and AI ethics research; understand the role played by Gebru in promoting diversity and ethics in AI; and explore how Gebru can attract more women researchers in AI and lead the movement toward inclusive and equitable technology.
Case overview/synopsis: The case discusses how Timnit Gebru (she), a prominent AI researcher and former co-lead of the Ethical AI research team at Google, is leading the way in promoting diversity, inclusion and ethics in AI. Gebru, one of the most high-profile black women researchers, is an influential voice in the emerging field of ethical AI, which identifies issues based on bias, fairness, and responsibility. Gebru was fired from Google in December 2020 after the company asked her to retract a research paper she had co-authored about the pitfalls of large language models and embedded racial and gender bias in AI. While Google maintained that Gebru had resigned, she said she had been fired from her job after she had raised issues of discrimination in the workplace and drawn attention to bias in AI. In early December 2021, a year after being ousted from Google, Gebru launched an independent community-driven AI research organization called Distributed Artificial Intelligence Research (DAIR) to develop ethical AI, counter the influence of Big Tech in research and development of AI, and increase the presence and inclusion of black researchers in the field of AI. The case discusses Gebru's journey in creating DAIR, the goals of the organization, and some of the challenges she could face along the way. As Gebru seeks to increase diversity in the field of AI and reduce the negative impacts of bias in the training data used in AI models, the challenges before her would be to develop a sustainable revenue model for DAIR, influence AI policies and practices inside Big Tech companies from the outside, inspire and encourage more women to enter the AI field, and build a decentralized base of AI expertise.
Complexity academic level: This case is meant for MBA students.
Social implications: Teaching Notes are available for educators only.
Subject code: CCS 11: Strategy
- Research Article
- 10.54254/2755-2721/76/20240576
- Jul 16, 2024
- Applied and Computational Engineering
With the rapid advancement of Artificial Intelligence (AI), the emergence of various AI models such as Stable Diffusion, ChatGPT, and MidJourney has brought numerous benefits and opportunities. Through extensive use, however, users have discovered biases related to gender, race, and other factors in these AI systems. This paper focuses on bias in AI and aims to investigate its causes and propose strategies for mitigation. Through a comprehensive literature review, the paper explores the phenomenon of bias in AI-generated content. Furthermore, we examine the reasons behind bias and solutions from social and intelligence science perspectives. From a social science perspective, we examine the effects of gender bias in AI and highlight the importance of incorporating diversity and gender theory in machine learning. From an intelligence science standpoint, we explore factors like biased datasets, algorithmic fairness, and the role of machine learning randomness in group fairness. Additionally, we discuss the research methodology employed, including the literature search strategy and quantity assessment. The results and discussions confirm the existence of bias in current AI products, particularly in the underrepresentation of women in the AI development field. Finally, we present future perspectives on reducing bias in AI products, including the importance of fair datasets, improved training processes, and increased participation of female engineers and intelligence scientists in the AI field. By addressing bias in AI, the field can strive for more equitable and responsible AI systems that benefit diverse users and promote social progress.
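The point about machine learning randomness and group fairness can be made concrete with a toy experiment; the synthetic data, the logistic regression model, and the minority-group proportion below are assumptions made for this sketch, not materials from the paper. Re-running the same pipeline with different random splits typically leaves overall accuracy nearly unchanged while the error rate measured on the smaller group fluctuates more visibly.

```python
# Toy demonstration: random train/test splits shift minority-group error
# more than overall accuracy, purely because the subgroup sample is small.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
group = (rng.random(n) < 0.15).astype(int)            # minority group ~15% of samples
X = rng.normal(size=(n, 5)) + group[:, None] * 0.3    # slight feature shift by group
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n) > 0).astype(int)

for seed in range(3):
    X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
        X, y, group, test_size=0.3, random_state=seed)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    pred = model.predict(X_te)
    acc = (pred == y_te).mean()
    minority_err = (pred[g_te == 1] != y_te[g_te == 1]).mean()
    print(f"seed {seed}: overall accuracy = {acc:.3f}, minority-group error = {minority_err:.3f}")
```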
- Research Article
- 10.59298/iaajb/2025/1313743
- Aug 3, 2025
- IAA Journal of Biological Sciences
Artificial Intelligence (AI) has transformed healthcare by enhancing diagnostic accuracy, treatment personalization, and health service efficiency. However, mounting evidence reveals that AI systems can perpetuate or even amplify existing disparities related to race, gender, socioeconomic status, and geographic location. Biases often originate from imbalanced training datasets, flawed algorithm design, and unequal data collection practices. These biases have led to misdiagnoses, unequal resource allocation, and inadequate treatment recommendations, disproportionately affecting marginalized communities. This review explores the roots of algorithmic bias in healthcare AI, analyzing real-world examples such as COVID-19 triage systems and diagnostic tools that underperform in minority populations. It also examines mitigation strategies, including bias-aware data collection, algorithm design techniques, regulatory frameworks, and stakeholder engagement. Successful case studies and future research directions are presented, emphasizing fairness, transparency, and trust in computational medicine. Establishing robust, bias-resilient AI frameworks is critical to achieving equitable health outcomes and reinforcing the ethical foundations of digital health.
Keywords: AI bias, health equity, algorithmic fairness, medical AI, healthcare disparities, machine learning, ethical AI, computational medicine.
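Among the mitigation strategies this review names, bias-aware data handling is the easiest to illustrate concretely. The sketch below shows one widely used pre-processing idea, reweighing training samples so that the protected attribute and the outcome become independent in the weighted data (in the spirit of Kamiran and Calders); the dataset, column names, and group labels are hypothetical, and the cited review is not claimed to use this exact method.

```python
# Reweighing sketch: weight each (group, label) cell by expected frequency under
# independence divided by its observed frequency, using hypothetical data.
import pandas as pd

df = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "label": [ 1,   1,   0,   1,   0,   0,   1,   0 ],
})

n = len(df)
p_group = df["group"].value_counts(normalize=True)
p_label = df["label"].value_counts(normalize=True)
p_joint = df.groupby(["group", "label"]).size() / n

df["weight"] = df.apply(
    lambda r: (p_group[r["group"]] * p_label[r["label"]]) / p_joint[(r["group"], r["label"])],
    axis=1,
)
# Underrepresented (group, label) combinations receive weights above 1.
print(df.groupby(["group", "label"])["weight"].first())
```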