Utilization of Ontology to Develop Artificial Intelligence Systems in the Healthcare Industry
Objectives: Ontologies play a crucial role in healthcare systems due to the diversity of concepts, roles, users, and diagnostic and therapeutic methods. They facilitate the development of knowledge bases and the sharing and representation of information. With the integration of artificial intelligence (AI) into healthcare, ontologies can serve as complementary tools to enhance the quality of services. Methods: This review study examines existing research on the application of ontologies in AI systems within the healthcare industry. By analyzing their applications, benefits, challenges, and limitations, the study seeks to provide a deeper understanding of their impact on advancing AI technologies and improving healthcare processes. In addition, the study offers recommendations for strengthening the development and use of ontologies in intelligent healthcare systems. Results: The findings of this review indicate that ontologies enhance the accuracy of results and support medical decision-making by enabling the semantic exchange of diverse and heterogeneous data. They are essential for the development of decision support systems and for fostering intelligent interactions between patients and healthcare systems. Furthermore, ontologies contribute to healthcare decision-making by semantically analyzing the connections between diseases, geographic regions, and environmental factors. Conclusions: The use of ontologies in healthcare improves data analysis, patient diagnosis, treatment, and decision-making. Ontologies enhance data inference and interoperability in AI systems through data modeling, concept relationship extraction, knowledge enrichment, and information sharing. Given the vast scope of the healthcare domain, the diversity of specialties and data, and the absence of a dedicated ontology development methodology specific to this field, there is a clear need for a tailored and robust methodology.
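The inference and interoperability mechanisms the review describes can be illustrated with a minimal sketch: a toy is-a hierarchy of medical concepts from which implicit superclass relations are inferred transitively. The concept names and the dictionary-based representation are illustrative assumptions, not drawn from any real medical ontology.

```python
# Minimal sketch of ontology-style subsumption inference: a toy is-a
# hierarchy (illustrative concept names only) from which implicit
# superclass relations are inferred by following links transitively.

# Hypothetical asserted is-a relations (child -> direct parent).
IS_A = {
    "viral_pneumonia": "pneumonia",
    "pneumonia": "lung_disease",
    "asthma": "lung_disease",
    "lung_disease": "disease",
}

def ancestors(concept: str) -> set:
    """Collect every superclass of a concept by walking the is-a links."""
    found = set()
    while concept in IS_A:
        concept = IS_A[concept]
        found.add(concept)
    return found

def is_subclass(child: str, parent: str) -> bool:
    """Infer an is-a relation that was never asserted directly."""
    return parent in ancestors(child)

# "viral_pneumonia is a disease" is not stated anywhere above; it is inferred.
print(is_subclass("viral_pneumonia", "disease"))  # prints True
```

Real healthcare ontologies (e.g. in OWL) add multiple parents, properties, and consistency checking on top of this idea, but the subsumption query above is the core of the data inference the abstract refers to.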
- Research Article
1
- 10.3233/shti240472
- Aug 22, 2024
- Studies in health technology and informatics
The integration of Artificial Intelligence (AI) in healthcare signifies a substantial shift, offering benefits to patients and healthcare systems while also introducing new risks. The emphasis on patient safety and performance standards is pivotal, especially with the European Union's strides towards regulating AI through the AI Act. This act focuses on classifying AI systems based on risk levels, mandating stringent requirements for high-risk AI, enhancing transparency, and ensuring ethics in AI applications. The concept of an "AI passport" is introduced as a living document detailing the AI system's purpose, ethical declarations, training, evaluation, and potential biases. This passport aims to enhance transparency and safety in medical AI applications, serving as a comprehensive record for patients, clinicians, and stakeholders. The AI passport, structured in JSON format, encapsulates key information about the AI system as a mechanism for continuous performance evaluation and transparency. This initiative may represent a significant step towards mitigating the risks associated with AI in healthcare, emphasizing the importance of accountability, transparency, and patient safety in the development and application of AI technologies.
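As a rough illustration of the passport idea, the sketch below assembles a JSON record of the kind the article describes (purpose, ethical declarations, training, evaluation, and known biases). Every field name and value here is a hypothetical assumption, not the article's actual schema.

```python
import json

# Illustrative "AI passport" record serialized as JSON. All field names and
# values are assumptions made up for this sketch, not the proposed standard.

ai_passport = {
    "system_name": "ExampleTriageModel",   # hypothetical AI system
    "intended_purpose": "Decision support for emergency department triage",
    "risk_class": "high",                  # risk level in the EU AI Act's sense
    "ethical_declarations": [
        "human oversight required",
        "no fully autonomous diagnosis",
    ],
    "training": {
        "data_sources": ["de-identified EHR records"],
        "last_trained": "2024-05-01",
    },
    "evaluation": {"metric": "AUROC", "value": 0.91, "cohort": "internal validation"},
    "known_biases": ["under-representation of patients over 80"],
}

# Serialize so the record can travel with the system as a living document.
record = json.dumps(ai_passport, indent=2)
print(record)
```

Because the passport is meant to be a living document, fields such as `evaluation` would be updated as the system is re-validated, with the JSON re-serialized each time.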
- Research Article
- 10.21202/jdtl.2025.7
- Mar 27, 2025
- Journal of Digital Technologies and Law
Objective: to identify key ethical, legal and social challenges related to the use of artificial intelligence in healthcare; to develop recommendations for creating adaptive legal mechanisms that can ensure a balance between innovation, ethical regulation and the protection of fundamental human rights. Methods: a multidimensional methodological approach was implemented, integrating classical legal analysis methods with modern tools of comparative jurisprudence. The study covers both the fundamental legal regulation of digital technologies in the medical field and an in-depth analysis of the ethical, legal and social implications of using artificial intelligence in healthcare. This integrated approach provides a comprehensive understanding of the issues and well-grounded conclusions about the development prospects in this area. Results: the study revealed a number of serious problems related to the use of artificial intelligence in healthcare, including data bias, non-transparent complex algorithms, and privacy violation risks. These problems can undermine public confidence in artificial intelligence technologies and exacerbate inequalities in access to health services. The authors conclude that the integration of artificial intelligence into healthcare should take into account fundamental rights, such as data protection and non-discrimination, and comply with ethical standards. Scientific novelty: the work proposes effective mechanisms to reduce risks and maximize the potential of artificial intelligence in crisis situations. Special attention is paid to regulatory measures, such as the impact assessment provided for by the Artificial Intelligence Act. These measures play a key role in identifying and minimizing the risks associated with high-risk artificial intelligence systems, ensuring compliance with ethical standards and the protection of fundamental rights. Practical significance: adaptive legal mechanisms were developed that support democratic norms and respond promptly to emerging challenges in public healthcare. The proposed mechanisms make it possible to balance the use of artificial intelligence for crisis management with the protection of human rights. This helps to build confidence in artificial intelligence systems and sustain their positive impact on public healthcare.
- Research Article
- 10.1177/09760016241287301
- Nov 6, 2024
- Apollo Medicine
Background and Aims: The integration of artificial intelligence (AI) is expected to revolutionise healthcare, compelling forthcoming healthcare professionals to arm themselves with essential knowledge and skills. Given this, understanding medical students’ (future healthcare providers’) perspectives and readiness is vital for achieving full integration. This study aimed to assess the knowledge, perspectives, and readiness perceived by medical students at Nnamdi Azikiwe University. Methods: This cross-sectional study recruited 340 medical students through convenience sampling. A pretested, self-structured questionnaire was utilised for data collection among students who were already in the clinical phase of their study programme. The Statistical Package for the Social Sciences (SPSS) was used for the analysis of the results. Results: The vast majority of the respondents (99.4%) had heard of AI, but only 3.2% were very familiar with its real-world applications. Most participants (96.8%) lacked formal education or training in AI, and few (7.4%) regularly followed AI-related news. Concerns about AI integration included data privacy (39.4%) and the potential loss of human touch in patient care (70.9%). Job displacement (72.1%) and misuse of AI (55.9%) were common fears. Despite these concerns, more than half of the respondents (55.6%) were interested in AI research, and many expressed openness to collaborating with AI systems (34.1%) and acquiring additional AI-related skills (67.9%). Conclusion: There was a lack of AI knowledge among the respondents, coupled with widespread scepticism about its integration. However, there is a notable interest in AI-related research and projects, indicating a willingness to explore its potential benefits.
- Research Article
13
- 10.3389/frhs.2024.1368030
- Jun 11, 2024
- Frontiers in health services
Evidence-based practice (EBP) involves making clinical decisions based on three sources of information: evidence, clinical experience and patient preferences. Despite popularization of EBP, research has shown that there are many barriers to achieving the goals of the EBP model. The use of artificial intelligence (AI) in healthcare has been proposed as a means to improve clinical decision-making. The aim of this paper was to pinpoint key challenges pertaining to the three pillars of EBP and to investigate the potential of AI in surmounting these challenges and contributing to a more evidence-based healthcare practice. We conducted a selective review of the literature on EBP and the integration of AI in healthcare to achieve this. Clinical decision-making in line with the EBP model presents several challenges. The availability and existence of robust evidence sometimes pose limitations due to slow generation and dissemination processes, as well as the scarcity of high-quality evidence. Direct application of evidence is not always viable because studies often involve patient groups distinct from those encountered in routine healthcare. Clinicians need to rely on their clinical experience to interpret the relevance of evidence and contextualize it within the unique needs of their patients. Moreover, clinical decision-making might be influenced by cognitive and implicit biases. Achieving patient involvement and shared decision-making between clinicians and patients remains challenging in routine healthcare practice due to factors such as low levels of health literacy among patients and their reluctance to actively participate, barriers rooted in clinicians' attitudes, scepticism towards patient knowledge and ineffective communication strategies, busy healthcare environments and limited resources. 
AI presents a promising solution to several challenges inherent in the research process, from conducting studies, generating evidence, synthesizing findings, and disseminating crucial information to clinicians, to implementing these findings in routine practice. AI systems have a distinct advantage over human clinicians in processing specific types of data and information. The use of AI has shown great promise in areas such as image analysis. AI presents promising avenues to enhance patient engagement by saving time for clinicians and has the potential to increase patient autonomy, although there is a lack of research on this issue. This review underscores AI's potential to augment evidence-based healthcare practices, potentially marking the emergence of EBP 2.0. However, there are also uncertainties regarding how AI will contribute to more evidence-based healthcare. Hence, empirical research is essential to validate and substantiate various aspects of AI use in healthcare.
- Conference Article
2
- 10.4271/2023-36-0042
- Jan 8, 2024
<div class="section abstract"><div class="htmlview paragraph">The integration of ergonomics and artificial intelligence (AI) in the automotive industry has the potential to revolutionize the way vehicles are designed, manufactured and used. The aim of this article is to review the recent literature on the subject and discuss the opportunities and challenges presented by the integration of these two fields. The paper begins by defining ergonomics and AI and providing an overview of their respective roles in the automotive industry. It then examines the benefits of integrating ergonomics and AI, including the optimization of vehicle design and the manufacturing process, the enhancement of the driver experience, and improvements in safety, accessibility, and customization. However, this integration also presents challenges, including ethical and legal considerations, data privacy, liability, and the impact on employment in the automotive industry. The paper reviews research on these challenges and suggests that the development of international standards for the integration of AI in vehicles may be necessary to ensure that in-vehicle AI systems are secure, highlighting the need for further work on combining ergonomics and AI in this sector. Future research should focus on addressing the ethical, legal, and societal implications of AI in vehicles, as well as exploring new opportunities for the use of AI in the design, manufacturing, and use of vehicles. Overall, the integration of ergonomics and AI in the automotive industry has the potential to significantly improve the design and manufacturing of vehicles, as well as enhance the driving experience for users. However, it also poses challenges that must be addressed, including ethical concerns, legal considerations, and effects on employment. By working to overcome these challenges, we can ensure that the benefits of ergonomics and AI in the automotive industry are fully realized while minimizing their potential negative impacts.</div></div>
- Research Article
- 10.47743/jss-2024-70-2-9
- Jan 1, 2024
- ANALELE ȘTIINŢIFICE ALE UNIVERSITĂŢII „ALEXANDRU IOAN CUZA” DIN IAȘI (SERIE NOUĂ). ȘTIINŢE JURIDICE
Artificial intelligence systems are being used in various areas such as e-commerce, e-government and e-advertising, mainly because of their efficiency and ability to provide fast access to services. However, AI uses a massive amount of data, raising concerns about the privacy and control of this data held by large corporations or government entities. Although there are some risks associated with AI, such as discrimination in AI algorithms and over-reliance on technology, there are also many benefits to be gained from using these systems: automating administrative processes, providing assistance and support to citizens, data analysis, personalised decision-making, etc. To maximise benefits and minimise risks, responsible control and management of AI data and systems is required. The integration of artificial intelligence in e-government and in the detection of breaches of the integrity of public functions may prove useful, but it must be accompanied by measures to ensure respect for human rights, data protection and non-discrimination.
- Research Article
3
- 10.1093/gerona/glaf024
- Feb 6, 2025
- The journals of gerontology. Series A, Biological sciences and medical sciences
Integration of artificial intelligence (AI) in health and healthcare, especially for older adults, has significantly advanced healthcare delivery. AI technologies, with capabilities such as self-learning and pattern recognition, are employed to address social isolation and monitor older adults' daily activities. However, rapid AI development often fails to consider the heterogeneous needs of older populations, which could exacerbate an existing digital divide and inequality. This scoping review examines older adults' involvement in the design, implementation, and evaluation of AI systems in the health and healthcare literature, emphasizing the necessity of their input for beneficial AI systems. We conducted a scoping review according to the PRISMA-ScR guidelines. We reviewed 17 studies, finding that half of these studies (n = 8) engaged older adults during the design phase, a small number (n = 3) during the evaluation stage, and even fewer (n = 2) in the implementation stage. Despite AI's growing role, design processes often overlook older adults' needs. Our findings emphasize the need for inclusive, participatory design approaches to address ethical and equity challenges, enhancing user engagement and relevance. We also highlight how these approaches address the needs of older adults and improve outcomes. Specifically, we integrated evidence showing the practical benefits of these approaches for better accessibility, usability, and engagement among older adults. Although AI has the potential to improve healthcare delivery, these approaches must be part of broader efforts to ensure ethical, inclusive, and equitable AI practices, especially in gerontology.
- Research Article
2
- 10.1108/lhs-01-2025-0018
- Sep 9, 2025
- Leadership in Health Services
Purpose This paper aims to explore the paradigm shift in leadership and strategic management driven by the integration of responsible artificial intelligence (AI) in healthcare. It explores the evolving role of leadership in adapting to AI technologies while ensuring ethical governance, transparency and accountability in healthcare decision-making. Design/methodology/approach This study conducts a comprehensive review of current literature, case studies and industry reports to evaluate the implications of responsible AI adoption in healthcare leadership. It focuses on key areas such as AI-driven decision-making, resource optimisation, crisis management and patient care, while also addressing challenges in integrating AI technologies effectively. Findings The integration of AI in healthcare is transforming leadership from traditional, experience-based decision-making to data-driven, AI-enhanced strategies. Responsible leadership emphasises addressing ethical concerns such as bias, transparency and accountability. AI technologies improve resource allocation, crisis management and patient care, but challenges such as workforce resistance and the need for upskilling healthcare professionals remain. Practical implications Healthcare leaders must adopt a responsible leadership framework that balances AI’s potential with ethical and human-centred care principles. Recommendations include developing AI literacy programmes for healthcare professionals, ensuring inclusivity in AI algorithms and establishing governance policies that promote transparency and accountability in AI applications. Originality/value This paper provides a critical, forward-looking perspective on how responsible AI can drive a paradigm shift in healthcare leadership. It offers novel insights into the integration of AI within healthcare organisations, emphasising the need for leadership that prioritises ethical AI usage and promotes patient well-being in a rapidly evolving digital landscape.
- Research Article
4
- 10.62019/abbdm.v4i1.100
- Feb 9, 2024
- The Asian Bulletin of Big Data Management
The integration of Artificial Intelligence (AI) in healthcare has been impeded by a significant issue: a lack of trust among healthcare professionals, stemming from the opacity of AI decision-making processes and a general unfamiliarity with AI technologies. This study investigates the impact of AI's explainability and healthcare professionals' familiarity with AI on their trust in AI applications within healthcare settings. Adopting a quantitative research methodology, the study utilized a structured questionnaire to gather data from a diverse group of healthcare professionals, including doctors, nurses, and administrators, across various hospitals and healthcare institutions in Pakistan. The research employed a stratified random sampling approach to ensure a comprehensive and representative data set. The results indicated a positive and significant relationship between AI explainability and trust in AI (path coefficient: 0.62, t-value: 5.20), suggesting that clearer and more transparent AI decision-making processes enhance healthcare professionals' trust. Similarly, familiarity with AI was found to positively influence trust in AI (path coefficient: 0.48, t-value: 4.35), highlighting the importance of exposure to and understanding of AI systems among healthcare professionals. These findings have crucial implications for both AI developers and healthcare administrators. For AI developers, the emphasis must be on creating more transparent and interpretable AI systems. For healthcare administrators, the results suggest the need to invest in training and educational programs to increase professionals' familiarity with AI, thereby enhancing trust and acceptance. The study significantly contributes to the existing literature by empirically validating the importance of AI explainability and familiarity in building trust in AI within the healthcare context, especially in a developing country setting.
For policymakers, these insights are critical in guiding strategies and policies aimed at effectively integrating AI into healthcare systems. By addressing the identified factors, healthcare sectors can better leverage AI's potential, leading to improved patient care and more efficient healthcare operations.
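To make the reported statistics concrete, the sketch below shows, on synthetic data, how a standardized path coefficient and its t-value relate in a simple bivariate case. This is a plain least-squares illustration, not the study's structural-model analysis; the variable names, sample size, and effect size are all assumptions invented for the example.

```python
import math
import random

# Synthetic-data illustration of a standardized coefficient and its t-value.
# The "explainability -> trust" effect size of 0.6 is an assumption for the
# demo, not the study's estimate; nothing here reproduces the actual survey.

random.seed(0)
n = 200
explainability = [random.gauss(0, 1) for _ in range(n)]          # hypothetical predictor
trust = [0.6 * e + random.gauss(0, 0.8) for e in explainability]  # hypothetical outcome

def standardize(values):
    """Rescale to mean 0, standard deviation 1."""
    m = sum(values) / len(values)
    s = math.sqrt(sum((v - m) ** 2 for v in values) / len(values))
    return [(v - m) / s for v in values]

x = standardize(explainability)
y = standardize(trust)

# On standardized variables, the regression slope IS the standardized coefficient.
sxx = sum(a * a for a in x)
beta = sum(a * b for a, b in zip(x, y)) / sxx

# t-value = coefficient / its standard error; roughly, t > 2 is significant at 5%.
ss_res = sum((b - beta * a) ** 2 for a, b in zip(x, y))
se = math.sqrt(ss_res / ((n - 2) * sxx))
t_value = beta / se

print(f"beta={beta:.2f}, t={t_value:.1f}")
```

Reading the study's figures the same way: a path coefficient of 0.62 with t = 5.20 means the standardized effect is large and more than five standard errors away from zero.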
- Book Chapter
- 10.1016/b978-0-443-24788-0.00004-2
- Jan 1, 2025
- Responsible and Explainable Artificial Intelligence in Healthcare
Chapter 4 - Designing transparent and accountable AI systems for healthcare
- Research Article
1
- 10.1007/s00063-024-01117-z
- Mar 28, 2024
- Medizinische Klinik - Intensivmedizin und Notfallmedizin
The integration of artificial intelligence (AI) into intensive care medicine has made considerable progress in recent studies, particularly in the areas of predictive analytics, early detection of complications, and the development of decision support systems. The main challenges remain availability and quality of data, reduction of bias and the need for explainable results from algorithms and models. Methods to explain these systems are essential to increase trust, understanding, and ethical considerations among healthcare professionals and patients. Proper training of healthcare professionals in AI principles, terminology, ethical considerations, and practical application is crucial for the successful use of AI. Careful assessment of the impact of AI on patient autonomy and data protection is essential for its responsible use in intensive care medicine. A balance between ethical and practical considerations must be maintained to ensure patient-centered care while complying with data protection regulations. Synergistic collaboration between clinicians, AI engineers, and regulators is critical to realizing the full potential of AI in intensive care medicine and maximizing its positive impact on patient care. Future research and development efforts should focus on improving AI models for real-time predictions, increasing the accuracy and utility of AI-based closed-loop systems, and overcoming ethical, technical, and regulatory challenges, especially in generative AI systems.
- Conference Article
- 10.54941/ahfe1004656
- Jan 1, 2024
Early detection of clusters of health conditions is essential to proactive clinical and public health interventions. Effective intervention strategies require real-time insights into the health needs of the communities. Artificial Intelligence (AI) systems have emerged as a promising avenue to detect patterns in health indicators at an individual and population level. The purpose of this paper is to describe the novel expanded application of AI to detect clusters in health conditions and community health needs to facilitate real-time intervention and prevention strategies. Case-use examples demonstrate the capabilities of AI to harness a variety of data to improve health outcomes in conditions ranging from infectious diseases, non-communicable diseases, and mental health disorders. AI systems have been utilized in syndromic surveillance to detect cases of infectious diseases prior to laboratory-confirmed diagnosis. These AI systems can analyze data from healthcare facilities, laboratories, and online self-reported symptoms to detect potential outbreaks and facilitate timely vaccination, resource allocation and public health messaging to mitigate the spread of disease. Similarly, the spread of vector-borne diseases can be anticipated through the analysis of historical data, weather reports and incidence of disease to identify areas to deploy vector control measures. In the area of mental health, AI algorithms can analyze diverse data sources such as social media posts, emergency hotline calls, emergency department visits, and hospital admissions to identify clusters related to mental health issues including overdoses, suicides, and burnout. The timely detection of such clusters enables prompt intervention, facilitating deployment of targeted mental health support services and community outreach programs to address these issues in a targeted and proactive manner. 
Identifying trends and characteristics in chronic disease data can guide screening and intervention strategies in real time. Similarly, AI can enhance pharmacovigilance by identifying previously unknown patterns in adverse drug reactions to inform regulatory bodies, healthcare providers and researchers in efforts to provide data-driven, real-time patient safeguards. By harnessing data from air-quality monitors, health records, and meteorology reports, AI systems identify correlations between environmental factors and health issues to empower efforts to address specific environmental health risks. These case-use examples illustrate the potential for AI to serve as a valuable tool to facilitate real-time, data-driven insights to inform proactive clinical and public health intervention strategies. Ongoing challenges in harnessing AI technology for public health surveillance include data privacy, accessing quality data from diverse data sets, and establishing effective communication channels between AI systems and public health authorities. The use of anonymized data to detect clusters and identify the health needs of health regions is a potential strategy to mitigate these challenges. Available resources are limited and must be deployed in a targeted, informed, and timely manner to be most effective. The integration of AI into an expanded all-risks approach to syndromic surveillance represents the next step in identifying and responding to clusters of health-related events in a proactive manner that aligns with community needs while upholding ethical standards and privacy considerations.
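A minimal sketch of the baseline-versus-current aberration check that syndromic surveillance systems build on: flag a day whose case count exceeds the recent baseline by more than two standard deviations. The daily counts are made up, and this is a generic textbook check, not any specific system described above.

```python
import statistics

# Toy aberration detection for syndromic surveillance: compare today's count
# against the mean + 2 standard deviations of the preceding days. The counts
# below are invented for the example.

daily_cases = [4, 5, 3, 6, 4, 5, 4, 6, 5, 14]  # hypothetical symptom reports

baseline = daily_cases[:-1]          # all days except today
mean = statistics.mean(baseline)
sd = statistics.stdev(baseline)
today = daily_cases[-1]

# Alert when today's count is an outlier relative to the baseline.
alert = today > mean + 2 * sd
print(alert)  # prints True: 14 far exceeds the ~4.7 +/- 1.0 baseline
```

Production systems layer seasonality adjustment, multiple data streams, and spatial clustering on top of this idea, but the core comparison is the same.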
- Research Article
2
- 10.37745/ijeats.13/vol12n27485
- Feb 15, 2024
- International Journal of Engineering and Advanced Technology Studies
The integration of Artificial Intelligence (AI) algorithms into safety-critical applications has become increasingly prevalent across various domains, including autonomous vehicles, medical diagnosis, industrial automation, and aerospace systems. These applications rely heavily on AI to make decisions that directly affect human safety, economic stability, and operational efficiency. Given the critical nature of these tasks, it is essential to rigorously assess the reliability of AI algorithms to ensure they perform consistently and accurately under all conditions. Reliability, in this context, refers to the AI system's ability to function without failure over a specific period, under defined operational conditions. In safety-critical domains, even minor errors or inconsistencies in AI decision-making can lead to catastrophic outcomes, such as traffic accidents involving autonomous vehicles, incorrect medical diagnoses leading to improper treatments, or failures in industrial processes that may cause costly downtime or even human casualties. The increasing complexity and deployment of AI technologies in these domains highlight the urgent need for a comprehensive understanding and evaluation of AI reliability. This paper provides a detailed analysis of the design considerations and methodologies for enhancing the reliability of AI algorithms. The discussion begins by exploring the underlying principles of reliability in AI systems, focusing on both theoretical and practical perspectives. We examine key factors that influence reliability, including data quality, algorithmic robustness, model interpretability, and system integration. The paper then delves into various reliability assessment techniques, such as fault tolerance mechanisms, error detection and correction methods, redundancy, and validation processes. 
To provide a deeper understanding of reliability in AI, we introduce mathematical models and statistical evaluation techniques that quantify reliability metrics. For instance, reliability modeling using exponential distribution, Monte Carlo simulations for probabilistic reliability analysis, and error propagation studies using Jacobian matrices are presented. We also explore the use of machine learning-specific reliability metrics, such as the Area Under the Curve (AUC) in Receiver Operating Characteristic (ROC) analysis, which helps evaluate the performance of AI in critical decision-making contexts. Furthermore, this paper addresses the current challenges and limitations in ensuring AI reliability, including computational complexity, ethical considerations, and regulatory compliance issues. We highlight the difficulties in developing AI models that can maintain their reliability across diverse and unpredictable real-world scenarios. The potential for bias, lack of transparency in AI decision-making, and difficulties in explaining complex AI models also present significant hurdles that need to be addressed to enhance reliability. The findings and methodologies discussed in this paper aim to contribute to a deeper understanding of the complex landscape of AI reliability, providing a framework for researchers, practitioners, and policymakers to develop safer, more reliable AI systems that can be trusted to operate in environments where safety is paramount.
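Two of the techniques named above, the exponential reliability model and Monte Carlo simulation, can be sketched together: the closed-form survival probability R(t) = exp(-λt) is checked against simulated failure times. The failure rate and mission time are illustrative assumptions, not values from the paper.

```python
import math
import random

# Sketch of exponential reliability modeling with a Monte Carlo cross-check.
# failure_rate and mission_time are illustrative assumptions for the demo.

failure_rate = 0.002   # failures per hour (hypothetical constant hazard)
mission_time = 500.0   # hours the system must survive

# Closed-form reliability under the exponential model: R(t) = exp(-lambda * t).
r_analytic = math.exp(-failure_rate * mission_time)

# Monte Carlo estimate: draw exponential failure times and count survivors.
random.seed(42)
trials = 100_000
survivors = sum(
    random.expovariate(failure_rate) > mission_time for _ in range(trials)
)
r_mc = survivors / trials

print(f"analytic R(t)={r_analytic:.4f}, Monte Carlo R(t)={r_mc:.4f}")
```

With these numbers λt = 1, so both estimates land near exp(-1) ≈ 0.368; the Monte Carlo route generalizes to systems (redundant components, non-exponential lifetimes) where no closed form exists.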
- Book Chapter
2
- 10.1016/b978-0-323-95068-8.00010-8
- Jan 1, 2024
- Artificial Intelligence in Medicine
Chapter 10 - Human-machine interaction: AI-assisted medicine, instead of AI-driven medicine
- Research Article
13
- 10.1186/s12910-024-01052-w
- May 11, 2024
- BMC medical ethics
Background: The integration of artificial intelligence (AI) in radiography presents transformative opportunities for diagnostic imaging and introduces complex ethical considerations. The aim of this cross-sectional study was to explore radiographers’ perspectives on the ethical implications of AI in their field and identify key concerns and potential strategies for addressing them. Methods: A structured questionnaire was distributed to a diverse group of radiographers in Saudi Arabia. The questionnaire included items on ethical concerns related to AI, the perceived impact on clinical practice, and suggestions for ethical AI integration in radiography. The data were analyzed using quantitative and qualitative methods to capture a broad range of perspectives. Results: Three hundred eighty-eight radiographers responded, with varying levels of experience and specializations. Most (44.8%) participants were unfamiliar with the integration of AI into radiography. Approximately 32.9% of radiographers expressed uncertainty regarding the importance of transparency and explanatory capabilities in the AI systems used in radiology. Many (36.9%) participants indicated that they believed that AI systems used in radiology should be transparent and provide justifications for their decision-making procedures. A significant preponderance (44%) of respondents agreed that implementing AI in radiology may increase ethical dilemmas. However, 27.8% expressed uncertainty in recognizing and understanding the potential ethical issues that could arise from integrating AI in radiology. Of the respondents, 41.5% stated that the use of AI in radiology required establishing specific ethical guidelines. However, a significant percentage (28.9%) expressed the opposite opinion, arguing that utilizing AI in radiology does not require adherence to ethical standards. In contrast to the 46.6% of respondents voicing concerns about patient privacy over AI implementation, 41.5% of respondents did not have any such apprehensions. Conclusions: This study revealed a complex ethical landscape in the integration of AI in radiography, characterized by enthusiasm and apprehension among professionals. It underscores the necessity for ethical frameworks, education, and policy development to guide the implementation of AI in radiography. These findings contribute to the ongoing discourse on AI in medical imaging and provide insights that can inform policymakers, educators, and practitioners in navigating the ethical challenges of AI adoption in healthcare.