Response to M. Trengove & coll regarding "Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine".

Citations (showing 6 of 6 papers)
  • Research Article
  • 10.1016/j.prro.2025.02.006
Transforming the Landscape of Clinical Information Retrieval Using Generative Artificial Intelligence: An Application in Machine Fault Analysis.
  • Feb 1, 2025
  • Practical radiation oncology
  • Tyler Alfonzetti + 1 more

  • Open Access
  • Research Article
  • Cited by 16
  • 10.1007/s00432-024-05673-x
Large language models as decision aids in neuro-oncology: a review of shared decision-making applications
  • Mar 19, 2024
  • Journal of Cancer Research and Clinical Oncology
  • Aaron Lawson Mclean + 3 more

Shared decision-making (SDM) is crucial in neuro-oncology, fostering collaborations between patients and healthcare professionals to navigate treatment options. However, the complexity of neuro-oncological conditions and the cognitive and emotional burdens on patients present significant barriers to achieving effective SDM. This discussion explores the potential of large language models (LLMs) such as OpenAI's ChatGPT and Google's Bard to overcome these barriers, offering a means to enhance patient understanding and engagement in their care. LLMs, by providing accessible, personalized information, could support but not supplant the critical insights of healthcare professionals. The hypothesis suggests that patients, better informed through LLMs, may participate more actively in their treatment choices. Integrating LLMs into neuro-oncology requires navigating ethical considerations, including safeguarding patient data and ensuring informed consent, alongside the judicious use of AI technologies. Future efforts should focus on establishing ethical guidelines, adapting healthcare workflows, promoting patient-oriented research, and developing training programs for clinicians on the use of LLMs. Continuous evaluation of LLM applications will be vital to maintain their effectiveness and alignment with patient needs. Ultimately, this exploration contends that the thoughtful integration of LLMs into SDM processes could significantly enhance patient involvement and strengthen the patient-physician relationship in neuro-oncology care.

  • Open Access
  • Front Matter
  • Cited by 5
  • 10.1080/07357907.2024.2347784
Artificial Intelligence in Cancer Clinical Research: I. Introduction
  • Apr 30, 2024
  • Cancer Investigation
  • Gary H Lyman + 1 more

  • Research Article
  • Cited by 19
  • 10.1093/jamia/ocae128
A comparative evaluation of ChatGPT 3.5 and ChatGPT 4 in responses to selected genetics questions
  • Jun 14, 2024
  • Journal of the American Medical Informatics Association : JAMIA
  • Scott P Mcgrath + 5 more

Objectives: To evaluate the efficacy of ChatGPT 4 (GPT-4) in delivering genetic information about BRCA1, HFE, and MLH1, building on previous findings with ChatGPT 3.5 (GPT-3.5), and to assess the utility, limitations, and ethical implications of using ChatGPT in medical settings. Materials and Methods: A structured survey was developed to assess GPT-4’s clinical value. An expert panel of genetic counselors and clinical geneticists evaluated GPT-4’s responses to these questions. We also performed a comparative analysis with GPT-3.5, using descriptive statistics and Prism 9 for data analysis. Results: The findings indicate improved accuracy in GPT-4 over GPT-3.5 (P < .0001). However, notable errors in accuracy remained. The relevance of responses varied in GPT-4 but was generally favorable, with a mean in the “somewhat agree” range. There was no difference in performance by disease category. The 7-question subset of the Bot Usability Scale (BUS-15) showed no statistically significant difference between the groups but trended lower in the GPT-4 version. Discussion and Conclusion: The study underscores GPT-4’s potential role in genetic education, showing notable progress yet facing challenges such as outdated information and the necessity of ongoing refinement. Our results, while showing promise, emphasize the importance of balancing technological innovation with ethical responsibility in healthcare information delivery.
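In methods terms, the comparison above reduces to testing whether expert ratings differ between the two models. A minimal sketch of that kind of analysis follows (the authors used Prism 9; this Python version, and the rating values in it, are hypothetical illustrations, not the study's data or code):

```python
# Hedged sketch: comparing hypothetical 5-point expert accuracy ratings
# for GPT-3.5 vs GPT-4 responses to the same question set.
from scipy import stats

gpt35_ratings = [2, 3, 2, 4, 3, 2, 3, 3, 2, 4]  # hypothetical values
gpt4_ratings = [4, 4, 3, 5, 4, 4, 5, 3, 4, 5]   # hypothetical values

# Mann-Whitney U is a common choice for ordinal Likert-style ratings
# (the paper reports P < .0001 for its own, larger dataset).
u_stat, p_value = stats.mannwhitneyu(gpt35_ratings, gpt4_ratings,
                                     alternative="two-sided")
print(f"U = {u_stat:.1f}, P = {p_value:.4f}")
```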

  • Research Article
  • Cited by 10
  • 10.12968/jowc.2024.33.4.229
Artificial intelligence in wound care: diagnosis, assessment and treatment of hard-to-heal wounds: a narrative review.
  • Apr 2, 2024
  • Journal of Wound Care
  • Mark G Rippon + 4 more

The effective assessment of wounds, both acute and hard-to-heal, is an important component in the delivery of efficacious wound care by wound care practitioners. Improved wound diagnosis, optimised wound treatment regimens, and enhanced wound prevention help provide patients with a better quality of life (QoL). There is significant potential for the use of artificial intelligence (AI) in health-related areas such as wound care. However, AI-based systems have yet to be developed to the point where they can be used clinically to deliver high-quality wound care. We have carried out a narrative review of the development and use of AI in the diagnosis, assessment and treatment of hard-to-heal wounds. We retrieved 145 articles from several online databases and other online resources, and 81 of them were included in this narrative review. Our review shows that AI application in wound care offers benefits in the assessment/diagnosis, monitoring and treatment of acute and hard-to-heal wounds. As well as offering patients the potential of improved QoL, AI may also enable better use of healthcare resources.

  • Open Access
  • Research Article
  • Cited by 2
  • 10.1093/bjrai/ubae008
Applications and implementation of generative artificial intelligence in cardiovascular imaging with a focus on ethical and legal considerations: what cardiovascular imagers need to know!
  • Mar 4, 2024
  • BJR|Artificial Intelligence
  • Ahmed Marey + 4 more

Machine learning (ML) and deep learning (DL) have potential applications in medicine. This overview explores the applications of AI in cardiovascular imaging, focusing on echocardiography, cardiac MRI (CMR), coronary CT angiography (CCTA), and CT morphology and function. AI, particularly DL approaches like convolutional neural networks, enhances standardization in echocardiography. In CMR, undersampling techniques and DL-based reconstruction methods, such as variational neural networks, improve efficiency and accuracy. ML in CCTA aids in diagnosing coronary artery disease, assessing stenosis severity, and analyzing plaque characteristics. Automatic segmentation of cardiac structures and vessels using AI is discussed, along with its potential in congenital heart disease diagnosis and 3D printing applications. Overall, AI integration in cardiovascular imaging shows promise for enhancing diagnostic accuracy and efficiency across modalities. The growing use of Generative Adversarial Networks in cardiovascular imaging brings substantial advancements but raises ethical concerns. The “black box” problem in DL models poses challenges for the interpretability that is crucial in clinical practice. Generative AI (GAI) models are assessed with evaluation metrics such as ROC curves, image quality, clinical relevance, diversity, and quantitative performance. Automation bias highlights the risk of unquestioned reliance on AI outputs, demanding careful implementation and ethical frameworks. Ethical considerations involve transparency, respect for persons, beneficence, and justice, necessitating standardized evaluation protocols. Health disparities emerge if AI training lacks diversity, impacting diagnostic accuracy. AI language models, like GPT-4, face hallucination issues, posing ethical and legal challenges in healthcare. Regulatory frameworks and ethical governance are crucial for fair and accountable AI. Ongoing research and development are vital to evolving AI ethics.
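Of the evaluation metrics listed above, the ROC curve is the most directly computable. A minimal sketch with scikit-learn, using hypothetical ground-truth labels and model scores (not data from the paper):

```python
# Hedged sketch: ROC/AUC evaluation of a binary classifier's scores.
from sklearn.metrics import roc_curve, roc_auc_score

y_true = [0, 0, 1, 1, 0, 1, 0, 1, 1, 0]                          # hypothetical labels
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.3, 0.75, 0.6, 0.15]  # hypothetical scores

fpr, tpr, thresholds = roc_curve(y_true, y_score)  # points on the ROC curve
print(f"AUC = {roc_auc_score(y_true, y_score):.3f}")
```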

Similar Papers
  • Research Article
  • Cited by 8
  • 10.1287/ijds.2023.0007
How Can IJDS Authors, Reviewers, and Editors Use (and Misuse) Generative AI?
  • Apr 1, 2023
  • INFORMS Journal on Data Science
  • Galit Shmueli + 7 more

  • Research Article
  • Cited by 28
  • 10.5204/mcj.3004
ChatGPT Isn't Magic
  • Oct 2, 2023
  • M/C Journal
  • Tama Leaver + 1 more

…during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (see The Effect of Open Access).

  • Discussion
  • Cited by 4
  • 10.1016/j.ebiom.2023.104671
Response to “Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine”
  • Jun 14, 2023
  • eBioMedicine
  • Markus Trengove + 2 more

  • Research Article
  • Cited by 16
  • 10.1162/daed_e_01897
Getting AI Right: Introductory Notes on AI & Society
  • May 1, 2022
  • Daedalus
  • James Manyika

  • Research Article
  • Cited by 4
  • 10.3389/fhumd.2022.703510
Distribution of Forward-Looking Responsibility in the EU Process on AI Regulation
  • Apr 12, 2022
  • Frontiers in Human Dynamics
  • Maria Hedlund

Artificial Intelligence (AI) is beneficial in many respects, but also has harmful effects that constitute risks for individuals and society. Dealing with AI risks is a future-oriented endeavor that needs to be approached in a forward-looking way. Forward-looking responsibility is about who should do what to remedy or prevent harm. With the ongoing EU policy process on AI development as a point of departure, the purpose of this article is to discuss the distribution of forward-looking responsibility for AI development with respect to what the obligations entail in terms of burdens or assets for the responsible agents and for the development of AI. The analysis builds on the documents produced in the course of the EU process, with a particular focus on the early role of the European Parliament, the work of the High-Level Expert Group on AI, and the Commission's proposal for a regulation of AI. It problematises the effects of forward-looking responsibility both for the agents to whom that responsibility is attributed and for the development of AI. Three issues were studied: ethics by design, Artificial General Intelligence (AGI), and competition. Overall, the analysis of the EU policy process on AI shows that competition is the primary value, and that the perspective is technical and focused on short-term concerns. As for ethics by design, the question of which values should be built into the technology, and how this should be settled, remained an issue even after the distribution of responsibility to designers and other technical experts. AGI never really was an issue in this policy process and was gradually phased out. Competition within the EU process on AI is a norm that frames how responsibility is approached, and gives rise to potential value conflicts.

  • Supplementary Content
  • Cited by 2
  • 10.1159/000541168
Generative AI in Critical Care Nephrology: Applications and Future Prospects
  • Aug 30, 2024
  • Blood Purification
  • Wisit Cheungpasitporn + 3 more

Background: Generative artificial intelligence (AI) is rapidly transforming various aspects of healthcare, including critical care nephrology. Large language models (LLMs), a key technology in generative AI, show promise in enhancing patient care, streamlining workflows, and advancing research in this field. Summary: This review analyzes the current applications and future prospects of generative AI in critical care nephrology. Recent studies demonstrate the capabilities of LLMs in diagnostic accuracy, clinical reasoning, and continuous renal replacement therapy (CRRT) alarm troubleshooting. As we enter an era of multiagent models and automation, the integration of generative AI into critical care nephrology holds promise for improving patient care, optimizing clinical processes, and accelerating research. However, careful consideration of ethical implications and continued refinement of these technologies are essential for their responsible implementation in clinical practice. This review explores the current and potential applications of generative AI in nephrology, focusing on clinical decision support, patient education, research, and medical education. Additionally, we examine the challenges and limitations of AI implementation, such as privacy concerns, potential bias, and the necessity for human oversight. Key Messages: (i) LLMs have shown potential in enhancing diagnostic accuracy, clinical reasoning, and CRRT alarm troubleshooting in critical care nephrology. (ii) Generative AI offers promising applications in patient education, literature review, and academic writing within the field of nephrology. (iii) The integration of AI into electronic health records and clinical workflows presents both opportunities and challenges for improving patient care and research. (iv) Addressing ethical concerns, ensuring data privacy, and maintaining human oversight are crucial for the responsible implementation of AI in critical care nephrology.

  • Research Article
  • 10.1088/1742-6596/2078/1/011001
Preface
  • Nov 1, 2021
  • Journal of Physics: Conference Series

We are pleased to report that the 2021 3rd International Conference on Artificial Intelligence Technologies and Applications (ICAITA 2021) was successfully held on September 10-12, 2021. In light of worldwide travel restrictions and the impact of COVID-19, ICAITA 2021 was carried out as a virtual conference to avoid gatherings. Because most participants remained highly enthusiastic about taking part, we held ICAITA 2021 on an online platform according to the original schedule rather than postponing it.

ICAITA 2021 brings together innovative academics and industrial experts in the field of Artificial Intelligence Technologies and Applications in a common forum. The primary goal of the conference is to promote research and development activities in Artificial Intelligence Technologies and Applications; a second goal is to promote the exchange of scientific information among researchers, developers, engineers, students, and practitioners working around the world. The conference will be held every year, making it an ideal platform for sharing views and experiences in Artificial Intelligence Technologies and Applications and related areas.

This scientific event brought together more than 100 national and international researchers in artificial intelligence technologies and applications. The conference was divided into three sessions: oral presentations, keynote speeches, and an online Q&A discussion. In the first session, scholars whose submissions were selected as excellent papers were each given 5-10 minutes for an oral presentation. In the second session, keynote speakers were each allocated 30-45 minutes for their speeches.

We were pleased to invite three distinguished experts to present insightful speeches. Our first keynote speaker was Prof. Yau Kok Lim from Sunway University, Malaysia, whose research interests include applied artificial intelligence, 5G networks, cognitive radio networks, routing and clustering, trust and reputation, and intelligent transportation systems. He was followed by Prof. Peter Sincak from the Technical University of Kosice, Slovakia, whose research covers artificial intelligence and intelligent systems. Finally, we were glad to welcome Chinthaka Premachandra from the Shibaura Institute of Technology, Japan, whose research interests include artificial intelligence, image processing, and robotics. In the last part of the conference, all participants were invited to join a WeChat group to discuss academic issues after the presentations; this online discussion lasted about 30-60 minutes. The first two sessions were conducted via the online collaboration tool Zoom, while the discussion was carried out through the instant-messaging tool WeChat. The online format enabled all participants to join this academic event from their own homes.

We are glad to share that we received many submissions despite this special period. We selected a set of high-quality papers and, after rigorous review, compiled them into these proceedings. The papers cover, but are not limited to, the following topics: Artificial Intelligence Applications & Technologies, Computing and the Mind, and Foundations of Artificial Intelligence. All papers went through a rigorous review process to meet international publication standards.

Lastly, we would like to express our sincere gratitude to the Chairman, the distinguished keynote speakers, and all the participants, and to thank the publisher for publishing the proceedings. We hope readers gain valuable knowledge from the proceedings, and we look forward to welcoming even more experts and scholars from around the world next year.

The Committee of ICAITA 2021. The lists of Committee Members, General Conference Chair, Technical Program Committee Chair, Academic Committee Chair, Technical Program Committee Members, and Academic Committee Members are available in this PDF.

  • Research Article
  • Cited by 5
  • 10.2139/ssrn.3261254
The Perils & Promises of Artificial General Intelligence
  • Oct 5, 2018
  • SSRN Electronic Journal
  • Brian Seamus Haney

  • Research Article
  • Cited by 2
  • 10.34190/icair.4.1.3153
Generative AI and Educational (In)Equity
  • Dec 4, 2024
  • International Conference on AI Research
  • Sonja Gabriel

This paper examines the complex relationship between generative artificial intelligence (AI) and educational equity, analysing both the opportunities and challenges presented by these emerging technologies in educational contexts. The paper begins by establishing fundamental distinctions between educational equality and equity, emphasizing how various socioeconomic, cultural, and systemic factors contribute to persistent educational disparities. It then provides a comprehensive overview of generative AI technologies, particularly focusing on Large Language Models (LLMs) and their applications in educational settings. The analysis reveals several promising applications of generative AI for promoting educational equity, including enhanced accessibility features for students with disabilities, personalized learning experiences, and the creation of Open Educational Resources (OER). The paper highlights how AI-assisted tutoring, incorporating Socratic dialogue methods, and AI-generated feedback systems can provide valuable educational support, especially in resource-constrained environments. These technologies demonstrate potential in breaking down traditional barriers to education by offering multilingual support, adaptive learning materials, and immediate feedback mechanisms. However, the paper also addresses significant challenges and risks associated with implementing generative AI in education. These include concerns about digital divides, both in terms of access to technology and digital literacy skills, as well as the potential for AI systems to perpetuate existing biases. The research emphasizes the importance of thoughtful integration of AI technologies in educational settings, suggesting that the most effective approach may be a balanced combination of human instruction and AI-supported learning. By examining these various aspects, the paper contributes to ongoing discussions about how to harness generative AI's potential while ensuring its implementation promotes, rather than hinders, educational equity. The findings have significant implications for educators, policymakers, and educational institutions working to create more equitable learning environments in an increasingly technology-driven world.

  • Research Article
  • Cited by 3
  • 10.1016/j.arthro.2024.12.001
Generative Versus Nongenerative Artificial Intelligence.
  • Mar 1, 2025
  • Arthroscopy : the journal of arthroscopic & related surgery : official publication of the Arthroscopy Association of North America and the International Arthroscopy Association
  • Sayyida S Hasan + 3 more

  • Research Article
  • Cited by 3
  • 10.1007/s00146-024-02087-8
Strong and weak AI narratives: an analytical framework
  • Oct 10, 2024
  • AI & SOCIETY
  • Paolo Bory + 2 more

The current debate on artificial intelligence (AI) tends to associate AI imaginaries with the vision of a future technology capable of emulating or surpassing human intelligence. This article advocates for a more nuanced analysis of AI imaginaries, distinguishing “strong AI narratives,” i.e., narratives that envision futurable AI technologies that are virtually indistinguishable from humans, from “weak” AI narratives, i.e., narratives that discuss and make sense of the functioning and implications of existing AI technologies. Drawing on the academic literature on AI narratives and imaginaries and examining examples drawn from the debate on Large Language Models and public policy, we underscore the critical role and interplay of weak and strong AI across public/private and fictional/non-fictional discourses. The resulting analytical framework aims to empower approaches that are more sensitive to the heterogeneity of AI narratives while also advocating normalising AI narratives, i.e., positioning weak AI narratives more firmly at the center stage of public debates about emerging technologies.

  • Research Article
  • Cited by 2
  • 10.3205/zma001702
Legal aspects of generative artificial intelligence and large language models in examinations and theses.
  • Jan 1, 2024
  • GMS journal for medical education
  • Maren März + 3 more

The high performance of generative artificial intelligence (AI) and large language models (LLMs) in examination contexts has triggered an intense debate about their applications, effects and risks. What legal aspects need to be considered when using LLMs in teaching and assessment? What possibilities do language models offer? The use of LLMs is assessed against the following statutes and laws: university statutes, state higher education laws, licensing regulations for doctors, the Copyright Act (UrhG), the General Data Protection Regulation (GDPR), and the AI Regulation (EU AI Act). LLMs and AI offer opportunities but require clear university frameworks. These should define legitimate uses and areas where use is prohibited. Cheating and plagiarism violate good scientific practice and copyright laws. Cheating is difficult to detect, and plagiarism by AI is possible; users of the products are responsible. LLMs are effective tools for generating exam questions. Nevertheless, careful review is necessary, as even apparently high-quality products may contain errors. However, the risk of copyright infringement with AI-generated exam questions is low, as copyright law allows up to 15% of protected works to be used for teaching and exams. The grading of exam content is subject to higher education laws and regulations and the GDPR. Exclusively computer-based assessment without human review is not permitted. For high-risk applications in education, the EU's AI Regulation will apply in the future. When dealing with LLMs in assessments, evaluation criteria for existing assessments can be adapted, as can assessment programmes, e.g. to reduce the motivation to cheat. LLMs can also become the subject of the examination themselves. Teachers should undergo further training in AI and consider LLMs as an addition.

  • Research Article
  • Cited by 1
  • 10.1007/s41669-025-00580-4
Using Generative Artificial Intelligence in Health Economics and Outcomes Research: A Primer on Techniques and Breakthroughs.
  • Apr 29, 2025
  • PharmacoEconomics - open
  • Tim Reason + 7 more

The emergence of generative artificial intelligence (GenAI) offers the potential to enhance health economics and outcomes research (HEOR) by streamlining traditionally time-consuming and labour-intensive tasks, such as literature reviews, data extraction, and economic modelling. To effectively navigate this evolving landscape, health economists need a foundational understanding of how GenAI can complement their work. This primer aims to introduce health economists to the essentials of using GenAI tools, particularly large language models (LLMs), in HEOR projects. For health economists new to GenAI technologies, chatbot interfaces like ChatGPT offer an accessible way to explore the potential of LLMs. For more complex projects, knowledge of application programming interfaces (APIs), which provide scalability and integration capabilities, and prompt engineering strategies, such as few-shot and chain-of-thought prompting, is necessary to ensure accurate and efficient data analysis, enhance model performance, and tailor outputs to specific HEOR needs. Retrieval-augmented generation (RAG) can further improve LLM performance by incorporating current external information. LLMs have significant potential in many common HEOR tasks, such as summarising medical literature, extracting structured data, drafting report sections, generating statistical code, answering specific questions, and reviewing materials to enhance quality. However, health economists must also be aware of ongoing limitations and challenges, such as the propensity of LLMs to produce inaccurate information ('hallucinate'), security concerns, issues with reproducibility, and the risk of bias. Implementing LLMs in HEOR requires robust security protocols to handle sensitive data in compliance with the European Union's General Data Protection Regulation (GDPR) and the United States' Health Insurance Portability and Accountability Act (HIPAA). Deployment options such as local hosting, secure API use, or cloud-hosted open-source models offer varying levels of control and cost, each with unique trade-offs in security, accessibility, and technical demands. Reproducibility and transparency also pose unique challenges. To ensure the credibility of LLM-generated content, explicit declarations of the model version, prompting techniques, and benchmarks against established standards are recommended. Given the 'black box' nature of LLMs, a clear reporting structure is essential to maintain transparency and validate outputs, enabling stakeholders to assess the reliability and accuracy of LLM-generated HEOR analyses. The ethical implications of using artificial intelligence (AI) in HEOR, including LLMs, are complex and multifaceted, requiring careful assessment of each use case to determine the necessary level of ethical scrutiny and transparency. Health economists must balance the potential benefits of AI adoption against the risks of maintaining current practices, while also considering issues such as accountability, bias, intellectual property, and the broader impact on the healthcare system. As LLMs and AI technologies advance, their potential role in HEOR will become increasingly evident. Key areas of promise include creating dynamic, continuously updated HEOR materials, providing patients with more accessible information, and enhancing analytics for faster access to medicines. To maximise these benefits, health economists must understand and address challenges such as data ownership and bias. 
The coming years will be critical for establishing best practices for GenAI in HEOR. This primer encourages health economists to adopt GenAI responsibly, balancing innovation with scientific rigor and ethical integrity to improve healthcare insights and decision-making.
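As one concrete illustration of the prompt-engineering strategies the primer names, here is a minimal few-shot prompting sketch using the OpenAI Python client; the model name, extraction task, and example sentences are assumptions made for illustration, not recommendations from the paper:

```python
# Hedged sketch: few-shot prompting for structured data extraction,
# one of the HEOR tasks (data extraction) the primer discusses.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "system",
     "content": "Extract the drug and comparator from each sentence as JSON."},
    # Few-shot example demonstrating the desired output format.
    {"role": "user", "content": "Pembrolizumab improved overall survival versus chemotherapy."},
    {"role": "assistant", "content": '{"drug": "pembrolizumab", "comparator": "chemotherapy"}'},
    # The actual query.
    {"role": "user", "content": "Nivolumab reduced progression versus placebo."},
]

response = client.chat.completions.create(model="gpt-4o-mini",  # assumed model
                                          messages=messages)
print(response.choices[0].message.content)
```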

  • Research Article
  • 10.28945/5354
Is Knowledge Management (Finally) Extractive? – Fuller’s Argument Revisited in the Age of AI
  • Jan 1, 2024
  • Interdisciplinary Journal of Information, Knowledge, and Management
  • Norman A Mooradian

Aim/Purpose: The rise of modern artificial intelligence (AI), in particular, machine learning (ML), has provided new opportunities and directions for knowledge management (KM). A central question for the future of KM is whether it will be dominated by an automation strategy that replaces knowledge work or whether it will support a knowledge-enablement strategy that enhances knowledge work and uplifts knowledge workers. This paper addresses this question by re-examining and updating a critical argument against KM by the sociologist of science Steve Fuller (2002), who held that KM was extractive and exploitative from its origins. Background: This paper re-examines Fuller’s argument in light of current developments in artificial intelligence and knowledge management technologies. It reviews Fuller’s arguments in its original context wherein expert systems and knowledge engineering were influential paradigms in KM, and it then considers how the arguments put forward are given new life in light of current developments in AI and efforts to incorporate AI in the KM technical stack. The paper shows that conceptions of tacit knowledge play a key role in answering the question of whether an automating or enabling strategy will dominate. It shows that a better understanding of tacit knowledge, as reflected in more recent literature, supports an enabling vision. Methodology: The paper uses a conceptual analysis methodology grounded in epistemology and knowledge studies. It reviews a set of historically important works in the field of knowledge management and identifies and analyzes their core concepts and conceptual structure. Contribution: The paper shows that KM has had a faulty conception of tacit knowledge from its origins and that this conception lends credibility to an extractive vision supportive of replacement automation strategies. The paper then shows that recent scholarship on tacit knowledge and related forms of reasoning, in particular, abduction, provide a more theoretically robust conception of tacit knowledge that supports the centrality of human knowledge and knowledge workers against replacement automation strategies. The paper provides new insights into tacit knowledge and human reasoning vis-à-vis knowledge work. It lays the foundation for KM as a field with an independent, ethically defensible approach to technology-based business strategies that can leverage AI without becoming a merely supporting field for AI. Findings: Fuller’s argument is forceful when updated with examples from current AI technologies such as deep learning (DL) (e.g., image recognition algorithms) and large language models (LLMs) such as ChatGPT. Fuller’s view that KM presupposed a specific epistemology in which knowledge can be extracted into embodied (computerized) but disembedded (decontextualized) information applies to current forms of AI, such as machine learning, as much as it does to expert systems. Fuller’s concept of expertise is narrower than necessary for the context of KM but can be expanded to other forms of knowledge work. His account of the social dynamics of expertise as professionalism can be expanded as well and fits more plausibly in corporate contexts. The concept of tacit knowledge that has dominated the KM literature from its origins is overly simplistic and outdated. As such, it supports an extractive view of KM. More recent scholarship on tacit knowledge shows it is a complex and variegated concept. 
In particular, current work on tacit knowledge is developing a more theoretically robust and detailed conception of human knowledge that shows its centrality in organizations as a driver of innovation and higher-order thinking. These new understandings of tacit knowledge support a non-extractive, human-enabling view of KM in relation to AI. Recommendations for Practitioners: Practitioners can use the findings of the paper to consider ways to implement KM technologies that do not neglect the importance of tacit knowledge in automation projects (neglect that often leads to failure). They should also consider how to enhance and fully leverage tacit knowledge through AI technologies and augment human knowledge. Recommendations for Researchers: Researchers can use these findings as a conceptual framework in research concerning the impact of AI on knowledge work. In particular, the distinction between replacement and enabling technologies, and the analysis of tacit knowledge as a structural concept, can be used to categorize and analyze AI technologies relative to KM research objectives. Impact on Society: The potential impact of AI on employment in the knowledge economy is a major issue in the ethics of AI literature and is widely recognized in the popular press as one of the pressing societal risks created by AI and specific types such as generative AI. This paper shows that KM, as a field of research and practice, does not need to and should not add to the risks created by automation-replacement strategies. Rather, KM has the conceptual resources to pursue a (human) knowledge-enablement approach that can stand as a viable alternative to the automation-replacement vision. Future Research: The findings of the paper suggest a number of research trajectories. They include: further study of tacit knowledge and its underlying cognitive mechanisms and structures in relation to knowledge work and KM objectives; research into different types of knowledge work and knowledge processes and the role that tacit and explicit knowledge play; research into the relation between KM and automation in terms of KM's history and current technical developments; and research into how AI augments knowledge work and how KM can provide an enabling framework.

  • Research Article
  • 10.34190/icair.4.1.3221
Bridging the Gap: Practical Challenges and Strategic Imperatives in Adopting Gen AI
  • Dec 4, 2024
  • International Conference on AI Research
  • Andrea Di Vetta

In today's rapidly evolving business landscape, Artificial Intelligence (AI) stands as the proverbial 'elephant in the room,' profoundly shaping diverse sectors and contexts. While debates rage among policymakers, practitioners, and politicians about regulating AI's widespread use, it is undeniable that AI represents a long-awaited digital technology poised to revolutionize organizational performance. Amidst the post-COVID era, the allure of AI has intensified, yet the critical question lingers: can firms effectively harness AI and other technologies, such as Generative AI and Large Language Models (LLMs), to enhance their existing systems? Through a systematic literature review, we explore a clear correlation between firms' implementation of AI, including cutting-edge Generative AI, and their ability to adapt to changing market dynamics, drive operational excellence, and unlock new avenues for growth. We also examine key drivers of AI and digital adoption, such as the imperative for data-driven decision-making, the quest for customer-centricity, and the drive for sustainable business practices. Our research not only highlights the transformative potential of AI and digital technologies but also provides actionable insights for business leaders navigating the complexities of technology adoption. By understanding the motivations, challenges, and strategic imperatives driving firms' technology choices, including the integration of Generative AI and LLMs, organizations can chart a path towards sustainable growth and competitive advantage in the digital age. This study underscores the revolutionary impact of Generative AI in digital transformation, offering a comprehensive understanding of its role in shaping the future of business.

More from: eBioMedicine
  • Research Article
  • 10.1016/j.ebiom.2025.106022
A computational genetic- and transcriptomics-based study nominates drug repurposing candidates for the treatment of chronic pain.
  • Nov 7, 2025
  • eBioMedicine
  • Alanna C Cote + 4 more

  • Research Article
  • 10.1016/j.ebiom.2025.105997
Treatment in acute HIV infection only temporarily preserves monocyte function: a comparative cohort study in adult males.
  • Nov 7, 2025
  • eBioMedicine
  • Killian E Vlaming + 13 more

  • Research Article
  • 10.1016/j.ebiom.2025.106009
Long lasting complement neutralisation by RAY121, an engineered anti-C1s antibody with C1q displacement function.
  • Nov 7, 2025
  • eBioMedicine
  • Adrian W S Ho + 27 more

  • Research Article
  • 10.1016/j.ebiom.2025.106010
AI-driven breath biopsy from a case-control study assists in the early detection of paediatric brain tumours.
  • Nov 7, 2025
  • eBioMedicine
  • Shangzhewen Li + 6 more

  • Research Article
  • 10.1016/j.ebiom.2025.106013
Genetic diagnostic yield by MRI pattern in children with cerebral palsy: a population-based study.
  • Nov 6, 2025
  • eBioMedicine
  • Jesia G Berry + 27 more

  • Research Article
  • 10.1016/j.ebiom.2025.105998
Characterisation of phosphate transport in epididymis and prostate with possible relevance for semen quality.
  • Nov 2, 2025
  • eBioMedicine
  • Zhihui Cui + 11 more

  • Research Article
  • 10.1016/j.ebiom.2025.105994
Drug toxicity prediction based on genotype-phenotype differences between preclinical models and humans.
  • Nov 1, 2025
  • eBioMedicine
  • Minhyuk Park + 3 more

  • Research Article
  • 10.1016/j.ebiom.2025.106002
Blimp-1 benefits gut-homing regulatory T cells by maintaining migration/suppressive function in autoimmune diabetes-prone mice.
  • Nov 1, 2025
  • eBioMedicine
  • Yi-Wen Tsai + 8 more

  • Research Article
  • 10.1016/j.ebiom.2025.106011
Prospective evaluation of circulating plasma thyroid hormones concentrations and breast cancer risk in the EPIC cohort.
  • Nov 1, 2025
  • eBioMedicine
  • Mathilde His + 26 more

  • Research Article
  • 10.1016/j.ebiom.2025.105986
Imaging brain development in a KCNQ2-developmental and epileptic encephalopathy mouse model: identifying early biomarkers for functional and structural brain changes
  • Nov 1, 2025
  • eBioMedicine
  • Charissa Millevert + 14 more
