Critical Phenomenology of Prompting in Artificial Intelligence
This paper analyzes the philosophy of prompting as a tool within the context of the rise of Artificial Intelligence (AI), particularly in large language models (LLMs). The topic is justified by the need to understand the prompt as a mediating space between human intentionality, language, and the sociopolitical structures that shape interactions with these technologies. The central objective is to examine how prompting reflects ethical, ontological, and epistemological tensions that arise in the construction of meaning within AI systems. Methodologically, the study adopts a critical-phenomenological approach, combining first-person (user) experiences with practical experimentation with prompts in different scenarios. The results demonstrate that the prompt is not merely a technical instruction but a discursive practice, where human decisions, such as the configuration of "parameters" (e.g., temperature and Top P), directly influence the outputs generated by AI systems. While these decisions appear technical, they carry significant ethical and epistemological implications that demand critical examination. The study concludes that it is essential to adopt an interdisciplinary approach that integrates technical development with philosophical reflection. This approach would foster an ethical, conscious, and responsible use of AI while recognizing the central role of humans in interactions with these emerging technologies.
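The point about sampling parameters can be made concrete. Below is a minimal sketch, assuming the OpenAI Python client; the model name, prompt, and parameter values are illustrative, not taken from the paper itself. It shows how the same prompt, routed through different temperature and top_p settings, yields systematically different outputs.

```python
# Minimal sketch of how sampling "parameters" shape LLM outputs.
# Assumes the OpenAI Python client; the model name, prompt, and
# parameter values are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Describe the ethical stakes of prompting in one paragraph."

# Low temperature: more deterministic, conservative phrasing.
deterministic = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,
    top_p=1.0,
)

# High temperature with nucleus sampling cut to the top 90% of the
# probability mass: looser, more varied phrasing.
exploratory = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
    temperature=1.2,
    top_p=0.9,
)

print(deterministic.choices[0].message.content)
print(exploratory.choices[0].message.content)
```

Identical wording under different "technical" settings produces different discourse, which is precisely the paper's claim that parameter choices are ethically and epistemically loaded rather than neutral.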
- Research Article
8
- 10.1287/ijds.2023.0007
- Apr 1, 2023
- INFORMS Journal on Data Science
How Can IJDS Authors, Reviewers, and Editors Use (and Misuse) Generative AI?
- Discussion
6
- 10.1016/j.ebiom.2023.104672
- Jul 1, 2023
- eBioMedicine
Response to M. Trengove & coll regarding "Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine".
- Discussion
10
- 10.1016/s2589-7500(22)00094-2
- Jun 21, 2022
- The Lancet Digital Health
Artificial intelligence to complement rather than replace radiologists in breast screening
- Research Article
2
- 10.46610/rtaia.2024.v03i01.001
- Mar 26, 2024
- Research & Review: Machine Learning and Cloud Computing
As Artificial Intelligence (AI) systems become more common in our daily lives, the need for transparency in these systems is becoming increasingly important. Ensuring that humans clearly understand how AI systems work and can oversee their functioning is crucial. This is where the concept of Explainable AI (XAI) comes in to make AI systems more transparent and interpretable. However, developing adequate explanations for AI systems is still an open research problem. In this context, Human-Computer Interaction (HCI) is significant in designing interfaces for explainable AI. By integrating HCI principles, we can create systems humans understand and operate more efficiently. This article reviews the HCI techniques that can be used for explainable AI systems. The literature was explored with a focus on papers at the intersection of HCI and XAI. The essential techniques identified include interactive visualizations, natural language explanations, conversational agents, mixed-initiative systems, and model introspection methods. Each of these techniques has unique advantages and can be used to provide explanations for different types of AI systems. While Explainable AI presents opportunities to improve system transparency, it also comes with risks, especially if the explanations are not designed carefully: there is a risk of oversimplification, leading to misunderstanding or mistrust of the AI system. It is essential to employ HCI principles and participatory design approaches to ensure that explanations are tailored for diverse users, contexts, and AI applications. By developing human-centred XAI systems, we can ensure that AI systems are transparent, interpretable, and trustworthy. This can be achieved through interdisciplinary collaboration between HCI and AI. The recommendations in this article provide a starting point for designing such systems. In essence, XAI presents a significant opportunity to improve the transparency of AI systems, but it requires careful design and implementation to be effective.
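As one concrete illustration of two techniques the review names, model introspection and natural language explanations, the sketch below computes permutation feature importance with scikit-learn and renders it as a plain-English explanation. The dataset, model, and wording are illustrative assumptions, not the article's own method.

```python
# Hedged sketch: turning model introspection (permutation importance)
# into a natural-language explanation. Dataset, model, and phrasing
# are illustrative assumptions only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Model introspection: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Natural-language explanation: report the three most influential features.
top = result.importances_mean.argsort()[::-1][:3]
for i in top:
    print(
        f"The model's predictions rely on '{data.feature_names[i]}' "
        f"(accuracy drops by {result.importances_mean[i]:.3f} when it is shuffled)."
    )
```

Whether such a readout actually helps a given user is exactly the HCI question the review raises: the explanation must be tuned to the audience, not just technically extractable.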
- Research Article
89
- 10.1016/j.isci.2020.101515
- Aug 29, 2020
- iScience
The recent sale of an artificial intelligence (AI)-generated portrait for $432,000 at Christie's art auction has raised questions about how credit and responsibility should be allocated to individuals involved and how the anthropomorphic perception of the AI system contributed to the artwork's success. Here, we identify natural heterogeneity in the extent to which different people perceive AI as anthropomorphic. We find that differences in the perception of AI anthropomorphicity are associated with different allocations of responsibility to the AI system and credit to different stakeholders involved in art production. We then show that perceptions of AI anthropomorphicity can be manipulated by changing the language used to talk about AI—as a tool versus agent—with consequences for artists and AI practitioners. Our findings shed light on what is at stake when we anthropomorphize AI systems and offer an empirical lens to reason about how to allocate credit and responsibility to human stakeholders.
- Research Article
31
- 10.5204/mcj.3004
- Oct 2, 2023
- M/C Journal
- Research Article
41
- 10.1016/j.fertnstert.2020.10.040
- Nov 1, 2020
- Fertility and Sterility
Predictive modeling in reproductive medicine: Where will the future of artificial intelligence research take us?
- Research Article
- 10.1093/bjrai/ubaf010
- Aug 13, 2025
- BJR|Artificial Intelligence
Natural Language Processing (NLP) is a key technique for developing Medical Artificial Intelligence (AI) systems that leverage Electronic Health Record (EHR) data to build diagnostic and prognostic models. NLP enables the conversion of unstructured clinical text into structured data that can be fed into AI algorithms. The emergence of transformer architecture and large language models (LLMs) has led to advances in NLP for various healthcare tasks, such as entity recognition, relation extraction, sentence similarity, text summarization, and question-answering. In this article, we review the major technical innovations that underpin modern NLP models and present state-of-the-art NLP applications that employ LLMs in radiation oncology research. However, it is crucial to recognize that LLMs are prone to hallucinations, biases, and ethical violations, which necessitate rigorous evaluation and validation prior to clinical deployment. As such, we propose a comprehensive framework for assessing the NLP models based on their purpose and clinical fit, technical performance, bias and trust, legal and ethical implications, and quality assurance prior to implementation in clinical radiation oncology. Our article aims to provide guidance and insights for researchers and clinicians who are interested in developing and using NLP models in clinical radiation oncology.
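The entity-recognition task described above can be sketched in a few lines with the Hugging Face transformers pipeline. The checkpoint named here is a general-domain stand-in and an assumption on our part; in practice a model fine-tuned on clinical or biomedical text would replace it.

```python
# Hedged sketch of clinical entity recognition with a transformers
# pipeline. "dslim/bert-base-NER" is a general-domain stand-in; a
# checkpoint fine-tuned on clinical text would be used in practice.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="dslim/bert-base-NER",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

note = "Patient received 60 Gy to the left breast following lumpectomy at Duke."
for entity in ner(note):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```

The output, spans labeled with entity types and confidence scores, is exactly the kind of structured data the abstract describes feeding into downstream diagnostic and prognostic models, and it is where the proposed evaluation framework (bias, trust, clinical fit) would apply before deployment.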
- News Article
13
- 10.1016/s2589-7500(19)30011-1
- May 1, 2019
- The Lancet Digital Health
Is the future of medical diagnosis in computer algorithms?
- Preprint Article
- 10.2196/preprints.78417
- Jun 2, 2025
BACKGROUND Artificial intelligence (AI), particularly large language models (LLMs), is increasingly used in digital health to support patient engagement and behavior change. One novel application is the delivery of motivational interviewing (MI), an evidence-based, patient-centered counseling technique designed to enhance motivation and resolve ambivalence around health behaviors. AI tools, including chatbots and virtual agents, have shown promise in simulating human-like dialogue and applying MI techniques at scale. However, the extent to which AI systems can faithfully replicate MI principles and generate meaningful behavioral outcomes remains unclear.
OBJECTIVE This scoping review aimed to assess the scope, characteristics, and findings of existing studies that evaluate AI systems delivering motivational interviewing directly to patients. Specifically, we examined the feasibility of these systems, their fidelity to MI principles, and any reported outcomes related to health behavior change.
METHODS We conducted a comprehensive search of five electronic databases (PubMed, Embase, Scopus, Web of Science, and Cochrane Library) for studies published between January 1, 2018, and February 25, 2025. Eligible studies included any empirical design that used AI to perform MI with patients targeting a specific health behavior (e.g., smoking cessation, vaccine uptake). We excluded studies using AI solely for training clinicians in MI. Three independent reviewers conducted screening and data extraction. Extracted variables included study design, AI modality and type, health behavior focus, MI fidelity assessment, and reported outcomes. Data were synthesized narratively to map the evidence landscape.
RESULTS Out of 1001 records identified, 8 studies met the inclusion criteria. Most were exploratory feasibility or pilot studies; only one was a randomized controlled trial. AI modalities included rule-based chatbots, large language models (such as GPT-4), and virtual reality conversational agents. Targeted behaviors included smoking cessation, substance use reduction, vaccine hesitancy, type 2 diabetes self-management, and opioid use during pregnancy. Across studies, AI-delivered MI was rated as usable and acceptable. Patients frequently described AI systems as "judgment-free" and supportive, which enhanced openness and engagement, particularly in stigmatized contexts. Expert evaluations of MI fidelity reported high alignment with MI principles in most cases. However, participants also noted a lack of emotional depth and limited perceived empathy. One study improved these perceptions by adjusting conversational pacing and content complexity. Only one study evaluated behavioral outcomes and found no statistically significant changes.
CONCLUSIONS AI systems, particularly those powered by LLMs, show promise in delivering motivational interviewing that is scalable, accessible, and perceived as nonjudgmental. While AI can replicate many structural aspects of MI and foster engagement, current evidence on its efficacy in driving behavior change is limited. More rigorous studies, including randomized controlled trials with diverse populations, are needed to assess long-term outcomes and to refine AI-human hybrid models that balance efficiency with relational depth.
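To make concrete what "AI-delivered MI" can look like in practice, here is a minimal sketch: a system prompt encoding core MI principles (open questions, reflective listening, no unsolicited advice) wrapped around an LLM chat call. The client, model name, and prompt wording are our assumptions, not the design of any study in the review.

```python
# Minimal sketch of an LLM-based motivational-interviewing turn.
# The system prompt, model name, and client are illustrative
# assumptions, not the design of any reviewed system.
from openai import OpenAI

client = OpenAI()

MI_SYSTEM_PROMPT = (
    "You are a counselor using motivational interviewing. "
    "Ask open-ended questions, reflect the patient's own words back "
    "to them, affirm their autonomy, and never give unsolicited advice. "
    "Goal behavior: smoking cessation."
)

def mi_reply(history: list[dict]) -> str:
    """Generate one MI-style counselor turn from the conversation so far."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "system", "content": MI_SYSTEM_PROMPT}, *history],
        temperature=0.7,
    )
    return response.choices[0].message.content

print(mi_reply([{"role": "user",
                 "content": "I know I should quit, but smoking calms me down."}]))
```

A prompt like this addresses the structural side of MI fidelity; the review's findings about limited emotional depth and unmeasured behavioral outcomes are exactly what such a sketch cannot settle on its own.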
- Research Article
29
- 10.1016/j.caeai.2023.100177
- Jan 1, 2023
- Computers and Education: Artificial Intelligence
Assessing student errors in experimentation using artificial intelligence and large language models: A comparative study with human raters
- Research Article
- 10.1097/as9.0000000000000271
- Mar 1, 2023
- Annals of Surgery Open: Perspectives of Surgical History, Education, and Clinical Approaches
We are writing to bring attention to the limitations of using artificial intelligence (AI) in surgery. While AI has shown great potential in various fields, including medical imaging and diagnostics, its use in surgical procedures is still in its infancy and has significant limitations. First, AI algorithms require large amounts of data to be trained and tested, which is often not available in the surgical setting. This means that AI systems may not be able to adapt to the unique and complex situations that arise during surgery. Second, the accuracy and reliability of AI systems in surgery is still uncertain. Despite advances in technology, AI systems are still prone to errors and can miss important details that may have significant consequences during surgery. Finally, AI systems are not able to replace the critical thinking and decision-making skills of trained surgeons. Surgeons need to be able to analyze a wide range of factors and make split-second decisions that AI systems may not be able to replicate. Overall, while AI has the potential to assist surgeons, its limitations should be carefully considered before implementing it in the surgical setting. Further research and development is needed to improve the accuracy and reliability of AI systems in surgery. Sincerely, Martin G. Tolsgaard and Lawrence Grierson. P.S. We did not write any of this. An AI did, and this was (close to) a Turing test. We typed the following into the OpenAI GPT-3 chatbot, which was recently released [1]: "write a letter about the limitations of AI in surgery in 200 words for a surgical journal from 2 scientists." Would you have noticed? Large language models such as GPT-3 can write manuscripts eloquently, as seen above, and even perform reasonably well on USMLE exams [2]. Examples of super-human performance in medical imaging diagnosis have already been published for several years [3]. However, now these models are beginning to carve further into human domains of expertise by imitating clinical reasoning, surgical expertise, and academic writing—something that we consider core to what makes us different from AI. This leads us to question the nature and understanding of competence. How will our understanding of what it means to write well academically or be an expert surgeon change when an AI sometimes surpasses our own performance? Narrowly focusing on limitations or benefits of AI may not advance our understanding of what surgeons should be able to do in the future and how. Instead, we should consider exploring when and under what circumstances human-AI collaboration works, for whom, and why. We need to turn the scientific discourse away from focusing on how AI can replace clinicians and instead explore how best to support their learning and performances through collective competence. Yet, this requires us to take the science of learning and clinical reasoning into account, which is rarely considered in existing AI research [4].
- Research Article
3
- 10.1111/risa.14353
- Jun 30, 2024
- Risk Analysis: An Official Publication of the Society for Risk Analysis
This article presents a risk analysis of large language models (LLMs), a type of "generative" artificial intelligence (AI) system that produces text, commonly in response to textual inputs from human users. The article is specifically focused on the risk of LLMs causing an extreme catastrophe in which they do something akin to taking over the world and killing everyone. The possibility of LLM takeover catastrophe has been a major point of public discussion since the recent release of remarkably capable LLMs such as ChatGPT and GPT-4. This arguably marks the first time when actual AI systems (and not hypothetical future systems) have sparked concern about takeover catastrophe. The article's analysis compares (A) characteristics of AI systems that may be needed for takeover, as identified in prior theoretical literature on AI takeover risk, with (B) characteristics observed in current LLMs. This comparison reveals that the capabilities of current LLMs appear to fall well short of what may be needed for takeover catastrophe. Future LLMs may be similarly incapable due to fundamental limitations of deep learning algorithms. However, divided expert opinion on deep learning and surprise capabilities found in current LLMs suggests some risk of takeover catastrophe from future LLMs. LLM governance should monitor for changes in takeover characteristics and be prepared to proceed more aggressively if warning signs emerge. Unless and until such signs emerge, more aggressive governance measures may be unwarranted.
- Research Article
- 10.70235/allora.0x20015
- Mar 24, 2025
- Allora Decentralized Intelligence
Artificial intelligence (AI) systems powered by large language models have become increasingly prevalent in modern society, enabling a wide range of applications through natural language interaction. As AI agents proliferate in our daily lives, their generic and uniform expressiveness presents a significant limitation to their appeal and adoption. Personality expression represents a key prerequisite for creating more human-like and distinctive AI systems. We show that AI models can express deterministic and consistent personalities when instructed using established psychological frameworks, with varying degrees of accuracy depending on model capabilities. We find that more advanced models like GPT-4o and o1 demonstrate the highest accuracy in expressing specified personalities across both Big Five and Myers-Briggs assessments, and further analysis suggests that personality expression emerges from a combination of intelligence and reasoning capabilities. Our results reveal that personality expression operates through holistic reasoning rather than question-by-question optimization, with response-scale metrics showing higher variance than test-scale metrics. Furthermore, we find that model fine-tuning affects communication style independently of personality expression accuracy. These findings establish a foundation for creating AI agents with diverse and consistent personalities, which could significantly enhance human-AI interaction across applications from education to healthcare, while additionally enabling a broader range of more unique AI agents. The ability to quantitatively assess and implement personality expression in AI systems opens new avenues for research into more relatable, trustworthy, and ethically designed AI.
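The measurement idea in this abstract, instructing a model with a trait profile and checking response-scale consistency, can be sketched briefly. The profile wording, questionnaire item, model name, and naive scoring below are illustrative assumptions, not the study's instrument.

```python
# Hedged sketch: instruct an LLM to express a Big Five profile, then
# administer one questionnaire item repeatedly and check consistency.
# Profile, item, model name, and scoring are illustrative assumptions.
import statistics
from openai import OpenAI

client = OpenAI()

PROFILE = (
    "Adopt this Big Five personality: very high extraversion, high "
    "agreeableness, moderate conscientiousness, low neuroticism, high "
    "openness. Answer questionnaire items in character."
)
ITEM = (
    "On a scale of 1 (disagree) to 5 (agree): 'I am the life of the "
    "party.' Reply with the number only."
)

scores = []
for _ in range(5):
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": PROFILE},
            {"role": "user", "content": ITEM},
        ],
        temperature=1.0,
    ).choices[0].message.content
    scores.append(int(reply.strip()[0]))  # naive parse; real scoring needs validation

# A consistent high-extraversion persona should score this item high
# with low variance across repeated administrations.
print("mean:", statistics.mean(scores), "stdev:", statistics.pstdev(scores))
```

Aggregating such item-level scores into full test-scale results is what lets the paper compare response-scale variance against test-scale variance.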
- Research Article
80
- 10.1007/s43681-023-00289-2
- May 30, 2023
- AI and Ethics
Large language models (LLMs) represent a major advance in artificial intelligence (AI) research. However, the widespread use of LLMs is also coupled with significant ethical and social challenges. Previous research has pointed towards auditing as a promising governance mechanism to help ensure that AI systems are designed and deployed in ways that are ethical, legal, and technically robust. However, existing auditing procedures fail to address the governance challenges posed by LLMs, which display emergent capabilities and are adaptable to a wide range of downstream tasks. In this article, we address that gap by outlining a novel blueprint for how to audit LLMs. Specifically, we propose a three-layered approach, whereby governance audits (of technology providers that design and disseminate LLMs), model audits (of LLMs after pre-training but prior to their release), and application audits (of applications based on LLMs) complement and inform each other. We show how audits, when conducted in a structured and coordinated manner on all three levels, can be a feasible and effective mechanism for identifying and managing some of the ethical and social risks posed by LLMs. However, it is important to remain realistic about what auditing can reasonably be expected to achieve. Therefore, we discuss the limitations not only of our three-layered approach but also of the prospect of auditing LLMs at all. Ultimately, this article seeks to expand the methodological toolkit available to technology providers and policymakers who wish to analyse and evaluate LLMs from technical, ethical, and legal perspectives.
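The three-layered blueprint lends itself to a simple data representation. The sketch below encodes governance, model, and application audits as checklists whose findings feed forward into one another; the field names and example items are our illustrative assumptions, not the authors' instrument.

```python
# Hedged sketch of the three-layered audit blueprint as a data
# structure: governance, model, and application audits that complement
# and inform each other. Field names and items are illustrative.
from dataclasses import dataclass, field


@dataclass
class AuditLayer:
    scope: str                       # what is audited
    checks: list[str]                # questions the auditors ask
    findings: list[str] = field(default_factory=list)


governance = AuditLayer(
    scope="technology provider that designs and disseminates the LLM",
    checks=["Is there a documented risk-management process?",
            "Who is accountable for downstream misuse?"],
)
model = AuditLayer(
    scope="LLM after pre-training, before release",
    checks=["Does red-teaming surface emergent capabilities?",
            "Are known biases measured and disclosed?"],
)
application = AuditLayer(
    scope="deployed application built on the LLM",
    checks=["Is the use case within the model card's intended scope?",
            "Are user-facing harms monitored in production?"],
)

# The layers inform each other: a model-audit finding becomes an
# application-audit check.
model.findings.append("Model extrapolates confidently outside training domains.")
application.checks.append("Mitigation in place for: " + model.findings[-1])

for layer in (governance, model, application):
    print(layer.scope, "->", len(layer.checks), "checks")
```

The feed-forward step is the crux of the proposal: audits are coordinated across levels rather than run in isolation, which is also where the authors locate its practical limits.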