Effect of AI chatbot–assisted case-based learning on clinical reasoning in occupational therapy students: a post-test-only randomized controlled trial in Parkinson’s disease education
ABSTRACT Clinical reasoning is vital yet difficult to teach in occupational therapy education. AI chatbots may support learning, but their effect on reasoning is unclear. We aimed to determine whether chatbot-assisted case-based learning enhances occupational therapy students' cognitive, affective, and psychomotor outcomes compared with traditional instruction. In a post-test-only, mixed-methods randomized controlled trial, 25 students (ages 20–23) in a neurological rehabilitation course were allocated to a chatbot (n = 11) or classic (n = 14) group. Teams analyzed a Parkinson's disease case and drafted intervention plans; the chatbot group interacted with an AI agent simulating the client, while the classic group used conventional resources. Outcomes were a six-item written exam, analyzed with ANCOVA adjusting for grade point average (GPA), and qualitative analysis of chatbot queries. Groups did not differ on total or domain-specific exam scores (p > .05). Qualitative analysis showed that chatbot queries overwhelmingly sought factual clarifications and procedural guidance, indicating that students treated the AI chiefly as an information source rather than a prompt for ethical or reflective reasoning. Chatbot-assisted learning yielded performance comparable to traditional methods. While useful for factual learning, unstructured chatbot use did not foster higher-order reasoning. Structured guidance and longitudinal research are needed to support deeper engagement and examine sustained affective benefits. Clinical Trial: NCT07045077.
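The ANCOVA described above adjusts the between-group exam comparison for GPA; the same adjustment can be sketched as ordinary least squares on a group dummy plus the covariate. A minimal pure-Python illustration with hypothetical data (function names and numbers are illustrative, not from the study):

```python
# ANCOVA-style adjustment sketched as OLS: exam score ~ group dummy + GPA.
# All data and names here are hypothetical, not from the study.

def solve(A, b):
    """Solve the small linear system A x = b by Gaussian elimination."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))  # partial pivot
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in reversed(range(n)):  # back substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def ancova_adjusted_effect(groups, gpas, scores):
    """Return [intercept, adjusted group effect, GPA slope] from y ~ group + gpa."""
    X = [[1.0, g, c] for g, c in zip(groups, gpas)]
    # Normal equations: (X'X) b = X'y
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(3)] for i in range(3)]
    Xty = [sum(row[i] * y for row, y in zip(X, scores)) for i in range(3)]
    return solve(XtX, Xty)
```

The middle coefficient is the group difference after holding GPA constant, which is the quantity the ANCOVA F-test evaluates.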
- Research Article
- 10.1080/17538157.2025.2589195
- Nov 26, 2025
- Informatics for Health and Social Care
Parents of children admitted to the PICU face an overwhelming informational landscape, necessitating accessible, patient-specific information. Large Language Models (LLMs) powering AI chatbots offer a promising solution for simplifying complex medical information. We aimed to characterize parental online health information-seeking (OHIS) behaviors and attitudes toward AI chatbots by conducting a cross-sectional survey of 139 English-speaking parents of children admitted to a large academic PICU between April and August 2024. We assessed OHIS behaviors, knowledge of and experience with AI chatbots, and attitudes regarding their potential healthcare utility. Most parents (87%) engaged in OHIS using search engines (86%). Parents with higher income and education sought information more frequently (OR 3.3, 95% CI 1.8–6.2; OR 2.9, 95% CI 1.5–5.7, respectively); those with higher education were less satisfied with online resources (OR 0.5, 95% CI 0.25–0.97). Parents expressed openness toward AI chatbots in healthcare applications (median 4/6). Significant socioeconomic disparities in current AI chatbot use favored male (OR 2.5, 95% CI 1.1–6.0) and higher-income (OR 3.8, 95% CI 1.1–12.7) parents. Parents of critically ill children show high OHIS behaviors and positive attitudes toward AI chatbots. Addressing significant socioeconomic disparities in AI chatbot use is crucial for developing equitable implementation strategies in the PICU.
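The odds ratios and 95% confidence intervals reported above (e.g., OR 3.3, 95% CI 1.8–6.2) compare the odds of a behavior between groups. For a simple 2×2 table, the OR and its Wald confidence interval can be computed directly; a minimal sketch with made-up counts (not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table.

    a: exposed with outcome, b: exposed without outcome,
    c: unexposed with outcome, d: unexposed without outcome.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

An interval excluding 1.0, as with the ORs above, indicates a statistically significant association at the chosen level.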
- Research Article
- 10.1007/s11606-025-10145-0
- Jan 21, 2026
- Journal of general internal medicine
AI chatbots are proliferating in healthcare systems. It is essential to explore how physicians use these tools in order to understand their influence on clinical care and outcomes. Our goal was to understand how physicians conceive of and incorporate AI into clinical decision-making. We conducted semistructured interviews with generalist physicians from inpatient and outpatient settings in the USA. Prior to the interview, participants were asked to use an AI chatbot, ChatGPT-4, to complete three mock clinical cases. Physicians were then interviewed regarding their perspectives on the AI chatbot. Interviews were conducted via video conference, recorded, transcribed, and analyzed using reflexive thematic analysis. We interviewed 22 physicians with 2–32 years of experience (median = 3 years). We identified a central organizing concept of "physician as filter" defining how physicians used the AI chatbot. This idea was composed of four themes. Theme 1: Physicians perceive clinical decision-making as a problem-solving activity, applying internally held knowledge to externally gathered information. Theme 2: AI chatbot systems are part of a continuum of information resources. Theme 3: Trust in the AI chatbot's outputs depends on the user's own clinical knowledge. Theme 4: Clinical decision-making is understood as the personalization of clinical knowledge and context. AI chatbots may help physicians with formulating a clinical problem and generating a hypothesis by expanding their repertoire of possible cases. Despite the "wealth of information" provided by AI chatbots, physician trust in the outputs is limited, especially when AI chatbots do not provide references. Physician users described filtering chatbot outputs, using their own clinical knowledge and experience, to determine what information is relevant.
In describing how providers perceive AI chatbots, we hope to guide further investigation of physician AI interaction and chatbot development that facilitates improved clinical reasoning.
- Research Article
- 10.3389/fpsyg.2025.1453072
- Mar 24, 2025
- Frontiers in psychology
The emergence of AI chatbot products has ushered in a new era of human-AI interaction, yet scholars and practitioners have expressed concerns about their use due to potential addictive and adverse effects. Currently, the understanding of problematic AI chatbot use (PACU) remains incomplete and inconclusive. Despite previous findings indicating negative outcomes associated with the use of AI products, few studies have explored the underlying factors that drive the complex process leading to the formation of PACU. Furthermore, while existing literature highlights how personal traits influence problematic IT use via evoked psychological states, it largely overlooks that positive psychological experiences may also influence problematic outcomes. Incorporating flow experience into compensatory internet use theory, this study presents a multiple mediation model to investigate how social anxiety, escapism, and AI chatbot flow mediate the relationship between self-esteem and PACU. We examine the model using Partial Least Squares Structural Equation Modeling (PLS-SEM) with cross-sectional data collected from 563 online users who have engaged with AI chatbots. Our findings indicate that users with low self-esteem are more likely to engage in problematic behavior when using AI chatbots, a relationship mediated by social anxiety, escapism, and AI chatbot flow. This study sheds light on how self-esteem negatively affects PACU, unraveling the underlying psychological processes experienced by users with low self-esteem in their interactions with AI chatbots. We also provide practical insights for online users and practitioners to mitigate the potential negative impacts of AI product usage.
- Research Article
- 10.1186/s12889-025-25386-1
- Nov 17, 2025
- BMC Public Health
Background: In conservative societies such as Lebanon and the broader Middle East and North Africa region, gynecological and intimate health issues are heavily stigmatized, limiting young women’s access to care due to fear of judgment, privacy concerns, and cultural taboos. These barriers often result in delayed diagnoses and poorer health outcomes. Large Language Models, such as ChatGPT and Gemini, have emerged as digital tools offering anonymity, reduced embarrassment, and accessibility, potentially serving as discreet “pocket doctors” for sensitive health concerns. However, little is known about young women’s perceptions and use of artificial intelligence for intimate health topics in such contexts. Methods: A cross-sectional quantitative study surveyed 525 female university students in Lebanon (ages 18–35) to assess their use, perceptions, drivers, and barriers related to artificial intelligence chatbots for intimate and general health concerns. Results: The study included 525 young Lebanese women with a mean age of 22.44 ± 3.74 years. Regarding AI chatbot use, the most common intimate health topics included menstrual problems (43.8%) and polycystic ovary syndrome (33.3%), while physical fitness (59.8%) and mental health (48.8%) were the predominant general health topics. The primary barriers to chatbot use were concerns about accuracy (85.5%) and lack of physical examination (85.3%), while key motivators included saving time (71.0%) and avoiding embarrassment (43.4%). Younger women were more likely to use artificial intelligence tools to avoid judgment and cost. Cluster analysis revealed distinct user profiles, including a super-user group with intensive engagement across sensitive health domains. Conclusion: Large language models serve as accessible, non-judgmental digital confidants for young Lebanese women’s intimate health concerns, addressing socio-cultural stigma and healthcare system limitations.
While promising, they should complement, not replace, professional care due to limitations in clinical reasoning, physical examination, and privacy concerns. Integrating artificial intelligence chatbots thoughtfully may enhance health information access and reduce barriers in stigmatized settings.
- Research Article
- 10.1080/10494820.2025.2605487
- Jan 1, 2026
- Interactive Learning Environments
Students are increasingly using AI chatbots for educational purposes in higher education, yet there are concerns about their effects on students’ critical thinking. Although developing AI literacy is considered essential for addressing these concerns, limited evidence exists regarding whether – and how – AI literacy mediates the relationship between chatbot use and critical thinking. This study employed an exploratory descriptive-correlational research design and collected data from 384 higher education students using an online self-report questionnaire. The data were analyzed through structural equation modeling (SEM) to evaluate both direct and mediating effects. The findings indicated a negative direct effect of AI chatbot use and a positive direct effect of AI literacy on students’ critical thinking. Moreover, AI literacy demonstrated a significant mediating role in the relationship between chatbot use and critical thinking. These results highlight the importance of strengthening students’ AI literacy skills to mitigate the potential adverse cognitive impacts associated with excessive AI chatbot use, particularly in relation to critical thinking.
- Research Article
- 10.1016/j.chb.2024.108460
- Oct 1, 2024
- Computers in Human Behavior
What drives AI-based risk information-seeking intent? Insufficiency of risk information versus (Un)certainty of AI chatbots
- Research Article
- 10.46743/1540-580x/2022.2204
- Sep 30, 2022
- Internet Journal of Allied Health Sciences and Practice
Purpose: Clinical reasoning (CR) is the ability to integrate knowledge of diagnoses with supporting theories to create effective, client-centered interventions. One means of teaching CR to rehabilitation students is standardized patient (SP) experiences. The relationship between faculty and student CR ratings after SP experiences has not been researched. The purpose of this study was to determine whether physical therapy (PT) and occupational therapy (OT) student ratings of CR skills correlate with faculty ratings after an SP experience. Method: The Clinical Reasoning Assessment Tool (CRAT) was used by students to self-reflect on their CR performance after an SP experience and compared to the respective faculty ratings. The CRAT includes three subsections: content knowledge, procedural knowledge, and conceptual reasoning, each with a visual analog scale. Correlations between students’ self-assessments of CR and faculty reviews were analyzed using Spearman’s rho. Results: Seventeen PT and seventeen OT students participated. Spearman’s rho correlation coefficients for the PT students and their faculty were: content knowledge (r=.180; p=.488), procedural knowledge (r=.697; p=.002), and conceptual reasoning (r=.258; p=.317). For the OT students and their faculty they were: content knowledge (r=.103; p=.693), procedural knowledge (r=.676; p=.003), and conceptual reasoning (r=.505; p=.039). Conclusions: Neither PT nor OT students’ content-knowledge ratings correlated significantly with faculty ratings. For procedural knowledge, both PT and OT student–faculty correlations were strong and statistically significant. For conceptual reasoning, PT student and faculty ratings were not significantly correlated, whereas OT student and faculty ratings showed a strong, positive, and statistically significant correlation.
Further research is needed to assess students’ CR development longitudinally across curricula.
- Research Article
- 10.1016/j.pec.2025.109271
- Nov 1, 2025
- Patient education and counseling
Exploring the influence of privacy concerns, AI literacy, and perceived health stigma on AI chatbot use in healthcare: An uncertainty reduction approach.
- Research Article
- 10.1108/oir-06-2024-0375
- Feb 26, 2025
- Online Information Review
Purpose This study aims to explain the privacy paradox, wherein individuals, despite privacy concerns, are willing to share personal information while using AI chatbots. Departing from previous research that primarily viewed AI chatbots from a non-anthropomorphic approach, this paper contends that AI chatbots are taking on an emotional component for humans. This study thus explores the topic from both rational and non-rational perspectives, thereby providing a more comprehensive understanding of user behavior in digital environments. Design/methodology/approach Employing a questionnaire survey (N = 480), this research focuses on young users who regularly engage with AI chatbots. Drawing upon parasocial interaction theory and privacy calculus theory, the study elucidates the mechanisms governing users’ willingness to disclose information. Findings Findings show that the cognitive, emotional and behavioral dimensions all positively influence the perceived benefits of using ChatGPT, which in turn enhance privacy disclosure. While all three dimensions negatively impact perceived risk, only the emotional and behavioral dimensions do so significantly; perceived risk in turn negatively influences privacy disclosure. Notably, the cognitive dimension’s lack of a significant mediating effect suggests that users’ awareness of privacy risks does not deter disclosure. Instead, emotional factors drive privacy decisions, with users more likely to disclose personal information based on positive experiences and engagement with ChatGPT. This confirms the existence of the privacy paradox. Research limitations/implications This study acknowledges several limitations. While the sample was adequately stratified, the focus was primarily on young users in China. Future research should explore broader demographic groups, including elderly users, to understand how different age groups engage with AI chatbots.
Additionally, although the study was conducted within the Chinese context, the findings have broader applicability, highlighting the potential for cross-cultural comparisons. Differences in user attitudes toward AI chatbots may arise due to cultural variations, with East Asian cultures typically exhibiting a more positive attitude toward social AI systems compared to Western cultures. This cultural distinction—rooted in Eastern philosophies such as animism in Shintoism and Buddhism—suggests that East Asians are more likely to anthropomorphize technology, unlike their Western counterparts (Yam et al., 2023; Folk et al., 2023). Practical implications The findings of this study offer valuable insights for developers, policymakers and educators navigating the rapidly evolving landscape of intelligent technologies. First, regarding technology design, the study suggests that AI chatbot developers should not focus solely on functional aspects but also consider emotional and social dimensions in user interactions. By enhancing emotional connection and ensuring transparent privacy communication, developers can significantly improve user experiences (Meng and Dai, 2021). Second, there is a pressing need for comprehensive user education programs. As users tend to prioritize perceived benefits over risks, it is essential to raise awareness about privacy risks while also emphasizing the positive outcomes of responsible information sharing. This can help foster a more informed and balanced approach to user engagement (Vimalkumar et al., 2021). Third, cultural and ethical considerations must be incorporated into AI chatbot design. In collectivist societies like China, users may prioritize emotional satisfaction and societal harmony over privacy concerns (Trepte, 2017; Johnston, 2009). Developers and policymakers should account for these cultural factors when designing AI systems. 
Furthermore, AI systems should communicate privacy policies clearly to users, addressing potential vulnerabilities and ensuring that users are aware of the extent to which their data may be exposed (Wu et al., 2024). Lastly, as AI chatbots become deeply integrated into daily life, there is a growing need for societal discussions on privacy norms and trust in AI systems. This research prompts a reflection on the evolving relationship between technology and personal privacy, especially in societies where trust is shaped by cultural and emotional factors. Developing frameworks to ensure responsible AI practices while fostering user trust is crucial for the long-term societal integration of AI technologies (Nah et al., 2023). Originality/value The study’s findings not only draw deeper theoretical insights into the role of emotions in generative artificial intelligence (gAI) chatbot engagement, enriching the emotional research orientation and framework concerning chatbots, but they also contribute to the literature on human–computer interaction and technology acceptance within the framework of the privacy calculus theory, providing practical insights for developers, policymakers and educators navigating the evolving landscape of intelligent technologies.
- Research Article
- 10.1186/s41073-025-00158-y
- Feb 28, 2025
- Research Integrity and Peer Review
Background: Artificial intelligence (AI) chatbots are novel computer programs that can generate text or content in a natural language format. Academic publishers are adapting to the transformative role of AI chatbots in producing or facilitating scientific research. This study aimed to examine the policies established by scientific, technical, and medical academic publishers for defining and regulating authors’ responsible use of AI chatbots. Methods: This study performed a cross-sectional audit of the publicly available policies of 162 academic publishers, indexed as members of the International Association of the Scientific, Technical, and Medical Publishers (STM). Data extraction of publicly available policies on the webpages of all STM academic publishers was performed independently, in duplicate, with content analysis reviewed by a third contributor (September–December 2023). Data were categorized into policy elements, such as ‘proofreading’ and ‘image generation’. Counts and percentages of ‘yes’ (i.e., permitted), ‘no’, and ‘no available information’ (NAI) were established for each policy element. Results: A total of 56/162 (34.6%) STM academic publishers had a publicly available policy guiding authors’ use of AI chatbots. No policy allowed authorship for AI chatbots (or any other AI tool). Most (49/56, 87.5%) required specific disclosure of AI chatbot use. Four policies/publishers placed a complete ban on the use of AI chatbots by authors. Conclusions: Only a third of STM academic publishers had publicly available policies as of December 2023. A re-examination of all STM members in 12–18 months may uncover evolving approaches toward AI chatbot use, with more academic publishers having a policy.
- Research Article
- 10.1016/j.jpurol.2025.08.029
- Dec 1, 2025
- Journal of pediatric urology
Quality of information on hypospadias from artificial intelligence chatbots: How safe is AI for patient and family information?
- Research Article
- 10.55908/sdgs.v11i4.794
- Aug 24, 2023
- Journal of Law and Sustainable Development
Purpose: This study examines how customer loyalty and customer value co-creation toward AI chatbots are formed by exploring the successive effects of perceived value aspects, perceived information quality, and technological self-efficacy on online trust, aspects of loyalty, and value co-creation. Theoretical framework: The increasingly strong human reception of a new wave of digitalization has created a need to understand customer loyalty and value co-creation formation for businesses that apply AI chatbots in their operations to attract and retain customers. The study utilized the perceived value dimensions, as well as perceived information quality, technological self-efficacy, and online trust, to comprehend loyalty and value co-creation. Design/methodology/approach: The study was conducted using a self-administered questionnaire survey with 447 participants who had used Pizza Hut's AI chatbot service in Vietnam. The data were analyzed by integrating two techniques: partial least squares structural equation modeling (PLS-SEM) and artificial neural networks (ANN). Findings: The results show that the aspects of perceived value (except hedonic value), perceived information quality, and technological self-efficacy all have a significant impact on online trust, which in turn leads to the formation of loyalty and a high ability to create value co-creation. The analysis also shows that perceived information quality has a stronger impact on online trust than technological self-efficacy. In addition, the non-linear ANN results show that attitudinal loyalty is relatively more important for value co-creation than behavioral loyalty. Research, practical & social implications: This study contributes to the emerging literature on the use of AI chatbots by investigating the possibility of consumers and providers co-creating value.
Second, the authors delved into the internal aspects of loyalty, separating it into two primary aspects, behavioral and attitudinal, to clarify their impact on AI chatbot loyalty and value co-creation. In conclusion, this research contributes to the existing body of knowledge by providing a more multidimensional perspective on these theories. Originality/value: By integrating PLS-SEM and ANN techniques to simultaneously explore both linear and non-linear mechanisms, this study explained the influence of aspects of perceived value, perceived information quality, and technological self-efficacy on aspects of loyalty and value co-creation via online trust in the AI chatbot context. In addition, this study extends perceived value theory to explore the impact of internal and external personal factors on AI chatbot use.
- Research Article
- 10.33422/ijarme.v5i4.961
- Jan 7, 2023
- International Journal of Applied Research in Management and Economics
Purpose: This study investigates the role of artificial intelligence chatbot (AI chatbot) quality and AI chatbot users across various banking needs and their impact on customer acceptance of AI chatbots through the mediating role of perceived usefulness and ease of use. Design/methodology/approach – This quantitative study uses a cross-sectional time dimension. The questionnaire was developed using multiple academic sources. Partial least squares structural equation modeling was used to analyze the data, with the SmartPLS 4 software used for the calculations. Findings – The findings indicated a significant positive direct relationship between AI chatbot quality and acceptance of AI chatbots (path coefficient = 0.138, p-value = 0.022). At the same time, the direct relationship between the AI chatbot user and acceptance of the AI chatbot was insignificant (path coefficient = 0.096, p-value = 0.246). The indirect-relationship results reveal that perceived usefulness and ease of use partially mediated the relationship between AI chatbot quality and acceptance of AI chatbots, and fully mediated the relationship between AI chatbot users and acceptance of the AI chatbot. Originality/value – The results of this study provide a framework for banking and other customer-oriented businesses in understanding and developing AI chatbots to address customer needs.
- Research Article
- 10.22610/imbr.v17i3(i)s.4726
- Oct 14, 2025
- Information Management and Business Review
Technology has significantly enhanced access to education, enabling students to benefit from lower costs and higher productivity. This advancement has been further accelerated by the incorporation of AI, especially through applications like AI chatbots. Despite these advances, students in higher education, especially those enrolled in public universities in developing countries, often exhibit scepticism toward adopting AI chatbots for academic assistance. Thus, the main goal of this study is to examine how students at public universities in developing nations intend to use AI chatbots as teaching aids. The study will use a purposive sampling strategy to gather data from students at different public universities, with a focus on Malaysia. To construct a robust conceptual model that explains students' behavioural intentions toward AI chatbot usage, this research adopts an integrated framework combining the Technology Acceptance Model (TAM) and the Task–Technology Fit (TTF) theory. For data analysis, the study will use the PLS-SEM approach, a well-regarded technique suitable for examining complex relationships in structural models. The findings are anticipated to provide both theoretical and practical insights. Theoretically, the study may add to the body of knowledge on AI chatbot adoption by illuminating the complex human behaviours linked to the adoption of new technologies. Practically, the findings may give governments, university administrators, and AI chatbot developers important guidance on how to work together to create AI chatbots that are more dependable, user-friendly, and attractive to students, especially in developing countries.
- Research Article
- 10.1111/bjet.13454
- Mar 22, 2024
- British Journal of Educational Technology
In recent years, AI technologies have been developed to promote students' self‐regulated learning (SRL) and proactive learning in digital learning environments. This paper discusses a comparative study between a generative AI‐based chatbot (SRLbot) and a rule‐based AI chatbot (Nemobot) in a 3‐week science learning experience with 74 Secondary 4 students in Hong Kong. The experimental group used SRLbot to maintain a regular study habit and facilitate their SRL, while the control group utilized the rule‐based AI chatbot. Results showed that SRLbot effectively enhanced students' science knowledge, behavioural engagement and motivation. Quantile regression analysis indicated that the number of interactions significantly predicted variations in SRL. Students appreciated the personalized recommendations and flexibility of SRLbot, which adjusted responses based on their specific learning and SRL scenarios. The ChatGPT‐enhanced instructional design reduced learning anxiety and promoted learning performance, motivation and sustained learning habits. Students' feedback on learning challenges, psychological support and self‐regulation behaviours provided insights into their progress and experience with this technology. SRLbot's adaptability and personalized approach distinguished it from rule‐based chatbots. The findings offer valuable evidence for AI developers and educators to consider generative AI settings and chatbot design, facilitating greater success in online science learning.
Practitioner notes
What is already known about this topic: AI technologies have been used to support student self‐regulated learning (SRL) across subjects. SRL has been identified as an important aspect of student learning that can be developed through technological support. Generative AI technologies like ChatGPT have shown potential for enhancing student learning by providing personalized guidance and feedback.
What this paper adds: This paper reports on a case study that specifically examines the effectiveness of ChatGPT in promoting SRL among secondary students. The study provides evidence that ChatGPT can enhance students' science knowledge, motivation and SRL compared to a rule‐based AI chatbot. The study offers insights into how ChatGPT can be used as a tool to facilitate SRL and promote sustained learning habits.
Implications for practice and/or policy: The findings of this study suggest that educators should consider the potential of ChatGPT and other generative AI technologies to support student learning and SRL. Educators and students should be aware of the limitations of AI technologies and ensure that they are used appropriately to generate desired responses. It is also important to equip teachers and students with AI competencies to enable them to use AI for learning and teaching.