
Related Topics

  • Artificial Intelligence Learning
  • Machine Artificial Intelligence
  • Artificial Machine

Articles published on Artificial Intelligence In Healthcare

2783 search results, sorted by recency
  • Research Article
  • 10.1186/s12910-026-01404-8
Ethical concerns and strategies for implementing artificial intelligence in healthcare: a review of empirical studies.
  • Feb 7, 2026
  • BMC medical ethics
  • Betelhem Zewdu Wubineh + 2 more

Artificial intelligence (AI) is profoundly transforming the healthcare landscape, presenting unprecedented opportunities to enhance patient care and clinical outcomes. However, the rapid integration of AI technologies has raised significant ethical concerns, requiring rigorous scrutiny to ensure their responsible and equitable use. This study aimed to explore the ethical considerations and strategies related to the implementation of AI in healthcare through a systematic review. A systematic search initially identified 243 publications published between 2019 and 2025. After applying inclusion and exclusion criteria, 22 papers were selected for final synthesis to assess ethical concerns and strategies related to AI in healthcare. The analysis identified key ethical concerns, categorizing them into six distinct groups: (1) Transparency and Trust, (2) Bias and Fairness, (3) Privacy and Data Security, (4) Accountability and Responsibility, (5) Ethical and Moral, and (6) Regulatory and Legal. Additionally, several ethical strategies were identified in the implementation of AI systems, including adherence to ethical principles, standards, and frameworks; transparency and bias mitigation; monitoring and auditing of AI systems; and stakeholder involvement and governance in decision-making processes. This review emphasizes the importance of addressing these ethical concerns to ensure the successful implementation of AI technologies in healthcare. The findings provide valuable insights and recommendations for stakeholders, including developers, healthcare professionals, and policymakers, to guide the ethical deployment of AI decision support systems in healthcare.

  • Research Article
  • 10.3390/sci8020036
Governing Healthcare AI in the Real World: How Fairness, Transparency, and Human Oversight Can Coexist: A Narrative Review
  • Feb 6, 2026
  • Sci
  • Paolo Bailo + 5 more

Artificial intelligence (AI) is rapidly shifting from experimental pilots to mainstream clinical infrastructure, redefining how evidence, accountability, and ethics intersect in healthcare. This narrative review integrates insights from peer-reviewed studies and policy frameworks to examine seven cross-cutting aspects: bias and fairness, explainability, safety and quality, privacy and data protection, accountability and liability, human oversight, and procurement and deployment. Findings reveal persistent inequities driven by dataset bias and opaque design; the need for explainability tools that are validated, task-specific, and usable by clinicians; and the centrality of post-market surveillance for sustaining patient safety. Privacy-preserving methods such as federated learning and differential privacy show promise but demand rigorous validation and regulatory coherence. Emerging liability models advocate shared enterprise responsibility, while governance-by-design—embedding transparency, auditability, and equity across the AI lifecycle—appears most effective in balancing innovation with public trust. Ethical, legal, and technical safeguards must evolve together to ensure that AI augments, rather than replaces, clinical judgment and institutional accountability.

  • Research Article
  • 10.3389/frhs.2025.1721620
Attitude and perception toward artificial intelligence among German physicians with intensive care experience: a survey study
  • Feb 5, 2026
  • Frontiers in Health Services
  • G D Giebel + 12 more

Introduction The applications of artificial intelligence (AI) in healthcare are very diverse. AI-based systems can assist with diagnosis and decision-making, particularly in intensive care medicine. However, physicians must accept these systems to fully exploit their potential. We investigated attitude and perception toward AI among physicians with intensive care experience. Methods A cross-sectional questionnaire survey was conducted between August and October 2024 among 7,475 physicians with intensive care experience. Participants were recruited via the hospital operator Knappschaftskliniken GmbH, the German Sepsis Society and via an address register. The questionnaire collected background information on the participants as well as their attitude toward and perception of AI. Their general attitudes toward AI were assessed using the validated Attari-12 tool. Questions specifically addressing attitude and perception of AI in healthcare were developed independently. Descriptive statistics and subgroup analysis were conducted. Results Of the 7,475 physicians initially contacted, 620 returned the questionnaire. Of these, 445 questionnaires were included in the evaluation. Most respondents were male (81.8%), aged over 50 years, and in leadership positions (92.1%). In both the general and the healthcare-specific context, attitudes toward AI were rather positive. The majority of physicians asked for AI applications that are comprehensible to the treating physicians (87.1%) and agreed that objective values alone are not always sufficient for making medical decisions (87.3%). Furthermore, physicians faced problems in finding reliable information about AI in healthcare (52.6%) and only 21.6% considered communication about AI in the medical community appropriate. Subgroup analysis revealed few differences for age and gender. The correlation between conscious use of AI in a professional context and attitude toward it was notable.
Discussion Physicians with intensive care experience generally hold a positive attitude toward AI, particularly in healthcare. However, the sample was predominantly male, older, and in leadership positions, so these findings may not fully reflect the attitudes of younger or female physicians. Several considerations were highlighted: AI outputs should be interpretable, clinical decisions cannot rely solely on objective data, and physicians need reliable information and guidance for further AI education. Leveraging the positive attitude could help make healthcare systems more efficient, effective, and sustainable.

  • Research Article
  • 10.64187/sh.2026.v2.i1.001
Artificial Intelligence in Healthcare: Review
  • Feb 5, 2026
  • SmartHealth
  • Irina Negut + 1 more

Artificial intelligence (AI) is becoming a steady presence in healthcare, changing how clinicians diagnose illness, plan treatment, and run medical systems. This review looks at how AI is being used in real medical settings today and examines the complex challenges and ethical questions that follow. Our goal is to offer a clear view of what AI can currently achieve, where it falls short, and how it might shape the future of patient care. To build this picture, we analyzed a wide range of research from the past ten years, focusing on AI's role in areas like diagnostics, treatment support, and patient management. We prioritized studies that provided a clear and comprehensive view, drawing insights from diverse healthcare fields and settings worldwide. The findings show that AI, particularly through machine learning and deep learning, is already making a difference. In specialties like radiology, oncology, and cardiology, it helps to detect diseases more accurately and forecast patient outcomes, which in turn supports clinical decisions and improves workflow. However, this progress is accompanied by serious ethical concerns. Issues of data privacy, hidden biases in algorithms, and a need for greater transparency and accountability are prominent. There is clear evidence that algorithmic bias can worsen health disparities, especially for underrepresented groups. In conclusion, AI in healthcare is a double-edged sword. It holds tremendous promise for improving patient care through smarter tools and greater efficiency, but it also forces us to confront crucial ethical issues. Moving forward, it is essential to build frameworks that ensure ethical standards, promote fairness, and actively reduce bias. The continued evolution of AI will depend on strong collaboration between technologists and healthcare professionals, ensuring we harness its power responsibly to earn patient trust and improve health for everyone.

  • Research Article
  • 10.2196/77481
An Explanation User Interface for Artificial Intelligence-Supported Mechanical Ventilation Optimization for Clinicians: User-Centered Design and Formative Usability Study.
  • Feb 3, 2026
  • JMIR formative research
  • Ian-C Jung + 4 more

The integration of artificial intelligence (AI) into clinical decision support systems (CDSSs) for mechanical ventilation in intensive care units (ICUs) holds great potential. However, the lack of transparency and explainability hinders the adoption of opaque AI models in clinical practice. Explanation user interfaces (XUIs), incorporating explainable AI algorithms, are considered a key solution to enhance trust and usability. Despite growing research on explainable AI in health care, little is known about how clinicians perceive and interact with such explanation interfaces in high-stakes environments such as the ICU. Addressing this gap is essential to ensure that AI-supported CDSSs are not only accurate but also trusted, interpretable, and seamlessly integrated into clinical workflows. This study aimed to evaluate the first iteration of the design and evaluation phase of an XUI for an AI-based CDSS intended to optimize mechanical ventilation in the ICU. Specifically, it explores how different user groups (ICU nurses and physicians) perceive and prioritize explanation concepts, providing the empirical foundation for subsequent refinement iterations. A midfidelity prototype was developed using the prototyping software Justinmind, based on existing guidelines, scientific literature, and insights from previous user-centered design (UCD) phases. The design process followed ISO (International Organization for Standardization) 9241-210 principles for UCD and combined qualitative and quantitative feedback to identify usability strengths, design challenges, and role-specific explanation needs. The prototype was evaluated formatively through 2 usability walkthroughs (walkthrough 1: 4 resident physicians and walkthrough 2: 4 ICU nurses), which included guided group discussions and Likert-scale assessments of explanation concepts in terms of understandability, suitability, and visual appeal.
The XUI was structured into 2 levels: a first level displaying high-level explanations (outlier warning and output certainty) alongside the CDSS output, and a second level offering more detailed explanations (available input, feature importance, and rule-based explanation) for users seeking deeper insight. While both user groups appreciated the first level, physicians found the second level of the XUI useful, whereas ICU nurses found it overly detailed. Thus, the structure was able to address the differing needs for explanations. The layered design helped balance transparency and information overload by providing initially concise explanations and more detailed ones on demand. The evaluation further strengthened evidence for role-dependent explanation needs, suggesting that nurses prefer actionable, concise insights, whereas physicians benefit from more granular transparency information. This study underscores the importance of UCD in designing XUIs for CDSS. It highlights the differing information needs of physicians and ICU nurses, emphasizing the value of involving users early in the development of suitable XUIs. The findings provide practical guidance for designing layered, role-sensitive explanation interfaces in critical care and form the basis for future iterative evaluations and experimental studies assessing their impact on decision-making and clinician trust.

  • Research Article
  • 10.1038/s41598-026-37779-2
Expectations and concerns of primary healthcare patients in rural areas and small towns in Poland regarding artificial intelligence.
  • Feb 3, 2026
  • Scientific reports
  • Justyna Kęczkowska + 2 more

The integration of artificial intelligence (AI) into healthcare presents transformative opportunities, but patient perspectives, particularly from digitally excluded populations, remain underexplored. This study aimed to analyze the awareness, acceptance and concerns regarding the use of AI in healthcare among primary care patients in rural and small-town regions of Poland. This is characteristic of the country, as over 60% of its population lives in such regions. It also sought to identify the demographic and psychosocial determinants of trust in AI. A cross-sectional survey was conducted using a paper questionnaire distributed to 545 adult patients in three primary care facilities in towns with populations below 20,000. Demographics, digital literacy, and attitudes towards AI were assessed. Statistical analyses included non-parametric tests and ordinal logistic regression. Most respondents expressed neutral (43%) or negative (31%) attitudes toward AI. Only 12.7% had direct experience with AI, and full trust in AI-assisted diagnoses was low (5.9%). Education was the strongest predictor of a positive AI attitude (P < 0.001); age was also significant (P = 0.04), while gender and place of residence were not. Most patients (86%) emphasized the importance of medical staff support. Patients in areas of low digital literacy approach AI with cautious optimism, valuing its potential but requiring human oversight. To foster an equitable adoption of AI, communication and education efforts must address patient concerns and expectations.

  • Research Article
  • 10.1016/j.ijmedinf.2025.106140
Ethical oversight of Artificial Intelligence in Nigerian Healthcare: A qualitative analysis of ethics committee members' perspectives on integration and regulation.
  • Feb 1, 2026
  • International journal of medical informatics
  • David B Olawade + 3 more

  • Research Article
  • 10.1111/1460-6984.70172
The Application of Artificial Intelligence in Speech and Language Therapy: Attitudes and Expectations.
  • Feb 1, 2026
  • International journal of language & communication disorders
  • Hanna Ehlert + 1 more

Artificial intelligence (AI) shows promise to support the prevention, diagnosis, and treatment of diseases in medicine and therapy. However, ethics is a priority concern in the development and implementation of AI across sectors. Common ethical themes in the healthcare literature of the last 5 years surround algorithmic bias, accountability, privacy, transparency and trust issues. The question arises how these challenges apply to speech and language therapy (SLT). Stakeholder attitudes towards the use of AI in healthcare have been investigated for the populations of physicians, medical students, and patients. However, no study has yet addressed the specific perspective of speech and language therapists on this technology. Therefore, the aim was to gather insights on the attitudes, hopes and concerns towards the (future) use of artificial intelligence from speech and language therapists working in clinical practice or research. An online survey with 11 closed and three open-ended questions was conducted in four German speaking countries. The quantitative analysis of the results involved correlating demographic factors, such as age, with the responses. The qualitative analysis compared the responses to this survey with the findings of healthcare literature and studies addressing other healthcare stakeholders. Five hundred eighty-seven professionals from Germany, Switzerland, Austria and Liechtenstein answered the questionnaire. All but 3% of the participants expect that AI will be applied at least to some extent in SLT in the future. The majority of the participants (65%) are open-minded towards the application of AI in SLT. Perceived potential benefits show a larger overlap than identified challenges with the existing literature. The possible loss of the 'human factor' in assessment and therapy is by far the most frequent concern (41%) that the participating speech and language therapists have about the use of AI.
Results further reflect the current level of knowledge about this technology in our profession. The use of AI in SLT can have a positive impact, but many factors need to be considered to prepare our profession for this type of technology. These include the expansion of education, the development of guidelines and the establishment of interdisciplinary collaborations, all aiming to develop, implement and enable the use of truly beneficial AI tools for assessment and intervention in SLT. What is already known on this subject Stakeholder involvement is important in the development and implementation of artificial intelligence in health care. Stakeholder attitudes towards the use of AI in healthcare have been investigated for the populations of physicians, medical students and patients. However, no study has yet addressed the specific perspective of speech and language therapists on this technology. What this paper adds to existing knowledge The majority of the participants are open-minded towards the application of AI in SLT and think that it will be used in our profession in the future. Perceived potential benefits and challenges align with the literature to some degree. One aspect that is especially emphasised by the participants is the potential loss of the 'human factor' in SLT. Results reflect participants' knowledge of AI as well as a specific therapeutic view on healthcare and intervention. What are the potential or actual clinical implications of this work? AI application in SLT has great potential, but also comes with challenges. Speech and language therapists need to expand their knowledge of this technology, prepare specific guidelines and engage in interdisciplinary collaborations to specify their perspective and needs in developing and implementing AI software. Only then will it become truly useful for clinicians, and they will be able to use it in a responsible and informed way.

  • Research Article
  • 10.1016/j.legalmed.2025.102764
Artificial intelligence in healthcare: Proposal for a new medico-legal methodology in medical liability.
  • Feb 1, 2026
  • Legal medicine (Tokyo, Japan)
  • Rossana Cecchi + 6 more

  • Research Article
  • 10.1177/15271544251381228
Regulating AI in Nursing and Healthcare: Ensuring Safety, Equity, and Accessibility in the Era of Federal Innovation Policy.
  • Feb 1, 2026
  • Policy, politics & nursing practice
  • Y Tony Yang + 1 more

The rapid integration of artificial intelligence in healthcare, accelerated by the Trump administration's 2025 AI Action Plan and private sector innovations from companies like Nvidia and Hippocratic AI, poses urgent challenges for nursing and health policy. This policy analysis examines the intersection of federal AI initiatives, emerging healthcare technologies, and nursing workforce implications through document analysis of regulatory frameworks, the federal AI Action Plan's 90+ initiatives, and insights from the American Academy of Nursing's November 2024 policy dialogue on AI transformation. The analysis reveals that while AI demonstrates measurable improvements in discrete clinical tasks (including 16% better medication assessment accuracy and 43% greater precision in identifying drug interactions, at $9 per hour compared to nurses' median $41.38 hourly wage), current federal policy lacks critical healthcare-specific safeguards. The AI Action Plan's emphasis on rapid deployment and deregulation fails to address safety-net infrastructure needs, implementation pathways for vulnerable populations, or mechanisms ensuring health equity. Evidence from the Academy dialogue indicates that AI's "technosocial reality" fundamentally alters care delivery while potentially exacerbating disparities in underserved communities, as demonstrated by algorithmic bias in systems like Optum's care allocation algorithm. The findings suggest that achieving equitable AI integration requires comprehensive regulatory frameworks coordinating FDA, CMS, OCR, and HRSA oversight; community-centered governance approaches redistributing decision-making power to affected populations; and nursing leadership in AI development to preserve patient-centered care values. Without proactive nursing engagement in AI governance, healthcare risks adopting technologies that prioritize efficiency over the holistic, compassionate care fundamental to nursing practice.

  • Research Article
  • 10.1016/j.colegn.2026.01.002
Lessons learned from a research project on AI for healthcare in the Australian healthcare setting
  • Feb 1, 2026
  • Collegian
  • Nicholas Marlow + 3 more

  • Research Article
  • 10.1016/j.compbiolchem.2025.108611
Generative artificial intelligence and large language models in smart healthcare applications: Current status and future perspectives.
  • Feb 1, 2026
  • Computational biology and chemistry
  • Md Asraful Haque + 1 more

  • Research Article
  • Cited by 1
  • 10.1016/j.ijmedinf.2025.106168
FANS: A framework for automatic assessment of nutritional status based on free-text clinical notes.
  • Feb 1, 2026
  • International journal of medical informatics
  • Jiahui Hu + 8 more

  • Research Article
  • 10.54097/y8w43a98
Predicting Diabetes Using Machine Learning: A Comparative Study of Logistic Regression, k-NN, and SVM
  • Jan 29, 2026
  • Academic Journal of Science and Technology
  • Yuqi Nie

Diabetes mellitus is a common chronic metabolic disease. Accurate diagnosis is vital to prevent or slow severe complications such as renal failure and cardiovascular diseases. This study applies modern machine learning techniques to predict diabetes using the Pima Indians Diabetes Database. A comparative analysis of three classification algorithms (Logistic Regression, k-Nearest Neighbors, and Support Vector Machines) is conducted to evaluate their predictive performance. Each model is assessed in terms of accuracy, recall, and AUC-ROC metrics. Results show that all three algorithms perform well and hold promise for clinical application. Among them, Logistic Regression achieves the best performance, with 78% accuracy, a recall score of 0.82, and an AUC-ROC value of 0.84, indicating its strong ability to identify true positive cases while minimizing false negatives. These findings demonstrate the potential of machine learning to enhance diagnostic precision and support clinical decision-making in diabetes management. The study highlights the broader role of artificial intelligence in healthcare, emphasizing its capacity to reduce system burdens, optimize resources, and improve patient outcomes. By leveraging such technologies, healthcare systems can move toward more efficient and timely interventions, ultimately transforming patient care.
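The three-way comparison described in this abstract can be sketched with scikit-learn. Note the assumptions: the Pima Indians Diabetes Database is not bundled with scikit-learn, so a synthetic binary-classification stand-in (`make_classification` with 8 features, mirroring the Pima feature count) is used here, and the hyperparameters are defaults rather than the authors' settings, so the metrics will not match the paper's reported values.

```python
# Sketch: compare Logistic Regression, k-NN, and SVM by accuracy,
# recall, and AUC-ROC, as in the study. Synthetic data stands in
# for the Pima Indians Diabetes Database (an assumption).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, recall_score, roc_auc_score

# 768 samples / 8 features echoes the Pima dataset's shape; the
# class weights roughly mimic its ~35% positive rate.
X, y = make_classification(n_samples=768, n_features=8, n_informative=5,
                           weights=[0.65, 0.35], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Standardize features inside each pipeline (k-NN and SVM are
# scale-sensitive); SVC needs probability=True for AUC-ROC scores.
models = {
    "Logistic Regression": make_pipeline(StandardScaler(), LogisticRegression()),
    "k-NN (k=5)": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "SVM (RBF)": make_pipeline(StandardScaler(), SVC(probability=True)),
}

results = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    proba = model.predict_proba(X_test)[:, 1]  # positive-class scores for AUC
    results[name] = {
        "accuracy": accuracy_score(y_test, pred),
        "recall": recall_score(y_test, pred),
        "auc_roc": roc_auc_score(y_test, proba),
    }

for name, m in results.items():
    print(f"{name}: acc={m['accuracy']:.2f} recall={m['recall']:.2f} auc={m['auc_roc']:.2f}")
```

On the real Pima data the same loop applies unchanged once `X` and `y` are loaded from the CSV; only the data-loading lines differ.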

  • Research Article
  • 10.3760/cma.j.cn112144-20251017-00413
Expert consensus on the application of artificial intelligence in stomatology
  • Jan 29, 2026
  • Zhonghua kou qiang yi xue za zhi = Zhonghua kouqiang yixue zazhi = Chinese journal of stomatology
  • Y Wei + 12 more

In recent years, artificial intelligence (AI) has rapidly advanced in the field of oral medicine, with applications extending across disease screening, diagnosis assistance, treatment planning, prognosis prediction, and dental education. Powered by deep learning and multimodal analytics, AI can efficiently integrate data from cone beam CT, intraoral scans, and electronic health records, enhancing precision and efficiency in managing dental caries, endodontic and periodontal diseases, oral mucosal lesions, and maxillofacial trauma. AI also contributes to omics research, biomaterial development, and laboratory automation, accelerating translational progress from basic science to clinical practice. Despite these advances, challenges such as a lack of standardized data governance, limited model interpretability, privacy and security risks, and insufficient clinical validation and regulatory frameworks remain. This expert consensus provides a comprehensive overview of AI applications in dentistry, outlines core technical pathways, and proposes recommendations related to data governance, platform development, ethics, and regulatory requirements. It aims to provide practical and unified guidance for dental practitioners, healthcare institutions, researchers, and industry stakeholders, promoting safe, standardized, and sustainable development of AI in oral healthcare.

  • Research Article
  • 10.2196/79613
Ethical Knowledge, Challenges, and Institutional Strategies Among Medical AI Developers and Researchers: Focus Group Study.
  • Jan 28, 2026
  • Journal of medical Internet research
  • Sophia Fantus + 3 more

As artificial intelligence (AI) becomes increasingly embedded in clinical decision-making and preventive care, it is urgent to address ethical concerns such as bias, privacy, and transparency to protect clinician and patient populations. Although prior research has examined the perspectives of medical AI stakeholders, including clinicians, patients, and health system leaders, far less is known about how medical AI developers and researchers understand and engage with ethical challenges as they develop AI tools. This gap is consequential because developers' ethical awareness, decision-making, and institutional environments influence how AI tools are conceptualized and deployed in practice. Thus, it is essential to understand how developers perceive these issues and what supports they identify as necessary for ethical AI development. The objectives of the study were twofold: (1) to examine medical AI developers' and researchers' knowledge, attitudes, and experiences with AI ethics; and (2) to identify recommendations to enhance and strengthen interpersonal and institutional ethics-focused training and support. We conducted 2 semistructured focus groups (60-90 minutes each) in 2024 with 13 AI developers and researchers affiliated with 5 US-based academic institutions. Participants' work spanned a wide variety of medical AI applications, including Alzheimer disease prediction, clinical imaging, electronic health records analysis, digital health, counseling and behavioral health, and genotype-phenotype modeling. Focus groups were conducted via Microsoft Teams, recorded, and transcribed verbatim. We applied conventional qualitative content analysis to inductively identify emerging concepts, categories, and themes. Coding was performed independently by 3 researchers, with consensus reached through iterative team meetings. 
The analysis identified four key themes: (1) AI ethics knowledge acquisition: participants reported learning about ethics informally through peer-reviewed literature, reviewer feedback, social media, and mentorship rather than through structured training; (2) ethical encounters: participants described recurring ethical challenges related to data bias, patient privacy, generative AI use, commercialization pressures, and a tendency for research environments to prioritize model accuracy over ethical reflection; (3) reflections on ethical implications: participants expressed concern about downstream effects on patient care and clinician autonomy, and model generalizability, noting that rapid technological innovation outpaces regulatory and evaluative processes; and (4) strategies to mitigate ethical concerns: recommendations included clearer institutional guidelines, ethics checklists, interdisciplinary collaboration, multi-institutional data sharing, enhanced institutional review board support, and the inclusion of bioethicists as members of the AI research team. Medical AI developers and researchers recognize significant ethical challenges in their work but lack structured training, resources, and institutional mechanisms to address them. Findings of this study underscore the need for institutions to consider embedding ethics into research processes through practical tools, mentorship, and interdisciplinary partnerships. Strengthening these supports is essential to preparing the next generation of developers to design and deploy ethical AI in health care.

  • Research Article
  • 10.3389/fmed.2025.1753443
Exploring Agentic AI in Healthcare: A Study on Its Working Mechanism
  • Jan 28, 2026
  • Frontiers in Medicine
  • Parvathaneni Naga Srinivasu + 3 more

Introduction: Rapid advancements in artificial intelligence (AI) have ushered in an era of hyperautomation and intelligent orchestration across multiple engineering domains, with healthcare emerging as one of the most impactful application areas. Among recent developments, Agentic AI has gained attention as a sub-domain of AI capable of autonomous operation, decision-making, and goal-driven behavior with minimal human intervention. This study aims to explore the architectural and functional role of Agentic AI in modern healthcare systems.

Methods: The study adopts a conceptual and analytical approach to examine the core components of Agentic AI, including agent design, decision-making mechanisms, task allocation strategies, agent coordination, and ranking frameworks. It further investigates the integration of emerging 6G networking technologies within Agentic AI architectures. A qualitative case study on remote robotic surgery is presented to illustrate practical applicability. Additionally, a Strengths, Weaknesses, Opportunities, and Threats (SWOT) analysis is conducted to assess strategic and operational considerations.

Results: The analysis demonstrates that Agentic AI architectures, when supported by high-speed, low-latency 6G communication, can enable efficient autonomous decision-making and coordinated task execution in complex healthcare workflows. The case study highlights the feasibility of Agentic AI in enabling remote robotic surgery with enhanced responsiveness, precision, and reliability. The SWOT analysis reveals strong potential for scalability and efficiency while also identifying challenges related to ethical governance, system robustness, and security.

Discussion: The findings suggest that Agentic AI represents a promising paradigm for next-generation healthcare systems, particularly in remote and critical care applications. While the proposed framework offers architectural insights and strategic value, responsible integration requires addressing limitations such as trust, regulatory compliance, and system transparency. Overall, this study provides a holistic understanding of how Agentic AI can be effectively and ethically integrated into healthcare ecosystems.
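The abstract names task allocation and agent ranking as core components of an Agentic AI architecture. The paper's own mechanism is not shown here, so the following is only a minimal illustrative sketch of one common approach — skill-based greedy allocation with load-aware ranking — using hypothetical agent and task names invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    skills: set
    load: int = 0  # tasks currently assigned to this agent

def rank_agents(agents, required):
    """Rank capable agents (full skill coverage) by current load, lowest first."""
    capable = [a for a in agents if required <= a.skills]
    return sorted(capable, key=lambda a: a.load)

def allocate(agents, tasks):
    """Greedily assign each task to the top-ranked capable agent;
    None marks tasks with no qualified agent (e.g. human escalation)."""
    assignment = {}
    for task_name, required in tasks.items():
        ranked = rank_agents(agents, required)
        if ranked:
            ranked[0].load += 1
            assignment[task_name] = ranked[0].name
        else:
            assignment[task_name] = None
    return assignment

# Hypothetical agents and tasks, purely for illustration.
agents = [Agent("imaging-agent", {"vision"}),
          Agent("triage-agent", {"vision", "nlp"})]
tasks = {"read-scan": {"vision"}, "summarize-note": {"nlp"}}
result = allocate(agents, tasks)
print(result)  # {'read-scan': 'imaging-agent', 'summarize-note': 'triage-agent'}
```

A production agentic system would add coordination protocols, feedback from task outcomes into the ranking, and the network-latency considerations the study attributes to 6G; this sketch shows only the ranking-and-allocation skeleton.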

  • New
  • Research Article
  • 10.61399/ikcusbfd.1640227
Readiness of Nurses and Nursing Students to Use Artificial Intelligence in Healthcare and Influencing Factors
  • Jan 28, 2026
  • İzmir Katip Çelebi Üniversitesi Sağlık Bilimleri Fakültesi Dergisi
  • Seher Çevik Aktura + 1 more

Objective: The acceptance and efficient implementation of artificial intelligence (AI)-based applications in routine nursing activities depend on readiness for artificial intelligence. This study aims to explore nurses' and nursing students' knowledge, opinions, and attitudes towards artificial intelligence and the factors that influence them.

Material and Method: The analytical cross-sectional study was conducted between March and May 2022 with 580 participants, comprising nurses (217) and nursing students (363) in a city in eastern Turkey. The data were collected using the "Information Form for Personal Details and Artificial Intelligence Knowledge of Nurses and Students" and the "Medical Artificial Intelligence Readiness Scale for Medical Students (MAIRS-MR)".

Results: This study showed that 46.1% of nurses and 34.4% of nursing students did not know how to use artificial intelligence in nursing. For both nurses and nursing students, the main source of information about artificial intelligence in nursing was social media, and the concept they most often associated with artificial intelligence was "robots". More than half of the nurses and students were curious about using artificial intelligence in nursing care. The nurses' and nursing students' mean MAIRS-MR scores were 67.17±18.19 and 69.41±15.18, respectively.

Conclusion: The study demonstrated that nurses and nursing students had a moderate level of readiness for medical artificial intelligence.

Keywords: Artificial intelligence, nurse, nursing student, readiness.

  • New
  • Research Article
  • 10.1007/s43681-026-01002-9
Governing by design: algorithmic normativity, clinical standards, and health policy implications of AI in healthcare
  • Jan 28, 2026
  • AI and Ethics
  • Ali Asadollahi


  • New
  • Research Article
  • 10.1007/s11673-025-10518-4
Decision Ownership and Deference to Healthcare AI.
  • Jan 28, 2026
  • Journal of bioethical inquiry
  • Emily Slome

As artificial intelligence (AI) becomes increasingly integrated into healthcare decision-making, concerns arise regarding deference to AI systems. While existing discussions highlight issues such as the opacity of AI outputs and responsibility gaps, this paper introduces a new concern: the impact of deference to AI on decision ownership among healthcare professionals. I argue that reliance on AI can lead to a diminished sense of ownership over decisions based on AI-generated outputs. This consequence is problematic, as a strong sense of decision ownership is essential for ensuring high-quality patient care. I recommend that this concern be taken into consideration when implementing AI in healthcare contexts. To address it, training and policies should discourage healthcare professionals from simply deferring to AI. Instead, we should encourage such professionals to treat AI as merely an assistive tool, one that complements rather than replaces their expertise. Whenever there is any doubt or lack of understanding associated with an AI output, further measures must be taken before a decision is made.
