Clinician Perceptions of Socrates 2.0: A Multi-Agent Artificial Intelligence Tool to Facilitate Socratic Dialogue
- Preprint Article
- 10.2196/preprints.80461
- Jul 10, 2025
BACKGROUND: Innovative, scalable mental health tools are needed to address systemic provider shortages and accessibility barriers. Large language model (LLM)-based tools can provide real-time, tailored feedback to help users engage in cognitive reappraisal outside traditional therapy sessions. Socrates 2.0 is a multi-agent artificial intelligence (AI) tool that guides users through Socratic dialogue. OBJECTIVE: Using a mixed methods approach, this study examined the feasibility, acceptability, and potential for symptom reduction of Socrates 2.0. METHODS: Sixty-one adults enrolled in a four-week mixed-methods pre-clinical feasibility study. Participants used Socrates 2.0 as desired and completed self-report measures of depression, social anxiety, posttraumatic stress, and obsessive-compulsive symptoms at baseline and one-month follow-up. Feasibility, acceptability, and appropriateness, along with usability and working alliance, were assessed via validated measures. Semi-structured interviews explored user experiences and perceptions. RESULTS: Participants engaged with Socrates 2.0 an average of 6.70 (SD=4.57) times over four weeks. Feasibility (mean=4.26, SD=0.67), acceptability (mean=4.16, SD=0.84), and usability ratings were high. Participants reported moderate reductions in depression (effect size d=0.30), social anxiety (d=0.25), obsessive-compulsive (d=0.33), and posttraumatic stress (d=0.28) symptoms. Working alliance scores suggested a moderately strong perceived bond with the AI tool. Qualitative feedback indicated that the nonjudgmental, on-demand nature of Socrates 2.0 encouraged self-reflection and exploration. Some users critiqued the repeated questioning style and limited conversation depth. CONCLUSIONS: Socrates 2.0 was perceived as feasible, acceptable, and moderately helpful for self-guided cognitive reappraisal, demonstrating potential as an adjunct to traditional therapy. Further research, including randomized trials, is needed to determine effectiveness across different populations, optimize personalization, and address repetitive conversational loops. CLINICAL TRIAL: Held P, Pridgen S, Chen Y, Akhtar Z, Amin D, Pohorence S. A Novel Cognitive Behavioral Therapy–Based Generative AI Tool (Socrates 2.0) to Facilitate Socratic Dialogue: Protocol for a Mixed Methods Feasibility Study. JMIR Res Protoc 2024;13:e58195. URL: https://www.researchprotocols.org/2024/1/e58195. DOI: 10.2196/58195
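The within-group effect sizes above (d = 0.25 to 0.33) are standardized pre-to-post mean changes. The sketch below shows one common way such a value can be computed; it is a minimal illustration using made-up scores and the baseline standard deviation as the standardizer, since the abstract does not state which standardizer was used.

```python
# Illustrative only: computing a within-group pre/post Cohen's d of the kind
# reported above (e.g., d ~ 0.30 for depression). The scores below are made up.
import numpy as np

def cohens_d_prepost(pre, post):
    """Standardized mean change: (baseline mean - follow-up mean) / SD of baseline scores."""
    pre = np.asarray(pre, dtype=float)
    post = np.asarray(post, dtype=float)
    return (pre.mean() - post.mean()) / pre.std(ddof=1)

# Hypothetical baseline and one-month follow-up depression scores (not study data)
pre_scores = [14, 18, 11, 22, 16, 19, 13, 17]
post_scores = [12, 15, 11, 19, 14, 18, 12, 15]

print(f"Cohen's d (pre vs. post) = {cohens_d_prepost(pre_scores, post_scores):.2f}")
```

Different standardizers (a pooled SD or the SD of change scores) give somewhat different values, so the specific figures reported in the abstract cannot be reproduced from this sketch.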
- Research Article
- 10.5204/mcj.3004
- Oct 2, 2023
- M/C Journal
Introduction
Author Arthur C. Clarke famously argued that in science fiction literature “any sufficiently advanced technology is indistinguishable from magic” (Clarke). On 30 November 2022, technology company OpenAI publicly released their Large Language Model (LLM)-based chatbot ChatGPT (Chat Generative Pre-Trained Transformer), and instantly it was hailed as world-changing. Initial media stories about ChatGPT highlighted the speed with which it generated new material as evidence that this tool might be both genuinely creative and actually intelligent, in both exciting and disturbing ways. Indeed, ChatGPT is part of a larger pool of Generative Artificial Intelligence (AI) tools that can very quickly generate seemingly novel outputs in a variety of media formats based on text prompts written by users. Yet, claims that AI has become sentient, or has even reached a recognisable level of general intelligence, remain in the realm of science fiction, for now at least (Leaver). That has not stopped technology companies, scientists, and others from suggesting that super-smart AI is just around the corner. Exemplifying this, the same people creating generative AI are also vocal signatories of public letters that ostensibly call for a temporary halt in AI development, but these letters are simultaneously feeding the myth that these tools are so powerful that they are the early form of imminent super-intelligent machines. For many people, the combination of AI technologies and media hype means generative AIs are basically magical insomuch as their workings seem impenetrable, and their existence could ostensibly change the world. This article explores how the hype around ChatGPT and generative AI was deployed across the first six months of 2023, and how these technologies were positioned as either utopian or dystopian, always seemingly magical, but never banal. We look at some initial responses to generative AI, ranging from schools in Australia to picket lines in Hollywood. We offer a critique of the utopian/dystopian binary positioning of generative AI, aligning with critics who rightly argue that focussing on these extremes displaces the more grounded and immediate challenges generative AI brings that need urgent answers. Finally, we loop back to the role of schools and educators in repositioning generative AI as something to be tested, examined, scrutinised, and played with, both to ground understandings of generative AI and to prepare today’s students for a future where these tools will be part of their work and cultural landscapes.
Hype, Schools, and Hollywood
In December 2022, one month after OpenAI launched ChatGPT, Elon Musk tweeted: “ChatGPT is scary good. We are not far from dangerously strong AI”. Musk’s post was retweeted 9,400 times, liked 73 thousand times, and presumably seen by most of his 150 million Twitter followers. This type of engagement typified the early hype and language that surrounded the launch of ChatGPT, with reports that “crypto” had been replaced by generative AI as the “hot tech topic” and hopes that it would be “‘transformative’ for business” (Browne). By March 2023, global economic analysts at Goldman Sachs had released a report on the potentially transformative effects of generative AI, saying that it marked the “brink of a rapid acceleration in task automation that will drive labor cost savings and raise productivity” (Hatzius et al.).
Further, they concluded that “its ability to generate content that is indistinguishable from human-created output and to break down communication barriers between humans and machines reflects a major advancement with potentially large macroeconomic effects” (Hatzius et al.). Speculation about the potentially transformative power and reach of generative AI technology was reinforced by warnings that it could also lead to “significant disruption” of the labour market, and the potential automation of up to 300 million jobs, with associated job losses for humans (Hatzius et al.). In addition, there was widespread buzz that ChatGPT’s “rationalization process may evidence human-like cognition” (Browne), claims that were supported by the emergent language around ChatGPT. The technology was explained as being “trained” on a “corpus” of datasets, using a “neural network” capable of producing “natural language” (Dsouza), positioning the technology as human-like, and more than ‘artificial’ intelligence. Incorrect responses or errors produced by the tech were termed “hallucinations”, akin to magical thinking, a word that OpenAI founder Sam Altman insisted he did not associate with sentience (Intelligencer staff). Indeed, Altman asserts that he rejects moves to “anthropomorphize” (Intelligencer staff) the technology; however, arguably the language, hype, and Altman’s well-publicised misgivings about ChatGPT have had the combined effect of shaping our understanding of this generative AI as alive, vast, fast-moving, and potentially lethal to humanity. Unsurprisingly, the hype around the transformative effects of ChatGPT and its ability to generate ‘human-like’ answers and sophisticated essay-style responses was matched by a concomitant panic throughout educational institutions. The beginning of the 2023 Australian school year was marked by schools and state education ministers meeting to discuss the emerging problem of ChatGPT in the education system (Hiatt). Every state in Australia, bar South Australia, banned the use of the technology in public schools, with a “national expert task force” formed to “guide” schools on how to navigate ChatGPT in the classroom (Hiatt). Globally, schools banned the technology amid fears that students could use it to generate convincing essay responses whose plagiarism would be undetectable with current software (Clarence-Smith). Some schools banned the technology citing concerns that it would have a “negative impact on student learning”, while others cited its “lack of reliable safeguards preventing these tools exposing students to potentially explicit and harmful content” (Cassidy). ChatGPT investor Musk famously tweeted, “It’s a new world. Goodbye homework!”, further fuelling the growing alarm about the freely available technology that could “churn out convincing essays which can't be detected by their existing anti-plagiarism software” (Clarence-Smith). Universities were reported to be moving towards more “in-person supervision and increased paper assessments” (SBS), rather than essay-style assessments, in a bid to out-manoeuvre ChatGPT’s plagiarism potential. Seven months on, concerns about the technology seem to have been dialled back, with educators more curious about the ways the technology can be integrated into the classroom to good effect (Liu et al.); however, the full implications and impacts of generative AI are still emerging.
In May 2023, the Writer’s Guild of America (WGA), the union representing screenwriters across the US creative industries, went on strike, and one of their core issues was “regulations on the use of artificial intelligence in writing” (Porter). Early in the negotiations, Chris Keyser, co-chair of the WGA’s negotiating committee, lamented that “no one knows exactly what AI’s going to be, but the fact that the companies won’t talk about it is the best indication we’ve had that we have a reason to fear it” (Grobar). At the same time, the Screen Actors’ Guild (SAG) warned that members were being asked to agree to contracts that stipulated that an actor’s voice could be re-used in future scenarios without that actor’s additional consent, potentially reducing actors to a dataset to be animated by generative AI technologies (Scheiber and Koblin). In a statement issued by SAG, the union made its position clear that the creation or (re)animation of any digital likeness of any part of an actor must be recognised as labour and properly paid, also warning that any attempt to legislate around these rights should be strongly resisted (Screen Actors Guild). Unlike the more sensationalised hype, the WGA and SAG responses to generative AI are grounded in labour relations. These unions quite rightly fear the immediate future where human labour could be augmented, reclassified, and exploited by, and in the name of, algorithmic systems. Screenwriters, for example, might be hired at much lower pay rates to edit scripts first generated by ChatGPT, even if those editors would really be doing most of the creative work to turn something clichéd and predictable into something more appealing. Rather than a dystopian world where machines do all the work, the WGA and SAG protests railed against a world where workers would be paid less because executives could pretend generative AI was doing most of the work (Bender).
The Open Letter and Promotion of AI Panic
In an open letter that received enormous press and media uptake, many of the leading figures in AI called for a pause in AI development since “advanced AI could represent a profound change in the history of life on Earth”; they warned early 2023 had already seen “an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control” (Future of Life Institute). Further, the open letter signatories called on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”, arguing that “labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts” (Future of Life Institute). Notably, many of the signatories work for the very companies involved in the “out-of-control race”. Indeed, while this letter could be read as a moment of ethical clarity for the AI industry, a more cynical reading might just be that in warning that their AIs could effectively destroy the world…
- Research Article
- 10.1108/lhtn-08-2024-0131
- Sep 17, 2024
- Library Hi Tech News
Purpose: The purpose of the paper is to explore the rapidly evolving landscape of artificial intelligence (AI) tools in academic research, highlighting their potential to transform various stages of the research process. AI tools are transforming academic research, offering numerous benefits and challenges. Design/methodology/approach: Academic research is undergoing a significant transformation with the emergence of AI tools. These tools have the potential to revolutionize various aspects of research, from literature review to writing and proofreading. An overview of AI applications in literature review, data analysis, writing, and proofreading is given, discussing their benefits and limitations. A comprehensive review of existing literature on AI applications in academic research was conducted, focusing on tools and platforms used in various stages of the research process. AI was itself used in some of the searches for AI applications in use. Findings: The analysis reveals that AI tools can enhance research efficiency, accuracy, and quality, but they also raise important ethical and methodological considerations. AI tools have the potential to significantly enhance academic research, but their adoption requires careful consideration of methodological and ethical implications. The integration of AI tools also raises questions about authorship, accountability, and the role of human researchers. The authors conclude by outlining future directions for AI integration in academic research and emphasizing the need for responsible adoption. Originality/value: As AI continues to evolve, it is essential for researchers, institutions, and policymakers to address the ethical and methodological implications of AI adoption, ensuring responsible integration and harnessing the full potential of AI tools to advance academic research. This is the paper's contribution to knowledge.
- Book Chapter
- 10.4018/979-8-3693-8292-9.ch021
- Feb 28, 2025
Higher education institutions throughout the world are challenged by the influx of Artificial Intelligence (AI) tools into education. Hence, awareness and use of AI tools among educators and students in higher education, and their perspectives about AI, are crucial. This chapter comprises a study exploring the awareness, use, and perspectives on AI among educators and students in some government and private sectors of higher education. An average number of educators and a vast number of students are aware of and use emerging AI tools such as ChatGPT or similar applications. Educators are divided on the view that AI tools are well known in the university. Students consider AI tools to be an essential tool for undergraduate students' success. Professional discussions on AI tools in education are suggested for educators in higher education, and students' insights are important in planning teaching and learning activities.
- Research Article
- 10.22214/ijraset.2025.76442
- Dec 31, 2025
- International Journal for Research in Applied Science and Engineering Technology
Artificial Intelligence (AI) tools are increasingly viewed as technologies that can improve teaching and learning. However, there is limited empirical evidence from low-income government school systems. This study examines the relationship between AI tool use, teacher readiness for AI, and student learning outcomes in government schools in Nepal. A quantitative cross-sectional survey design was used. Primary data were collected from 412 government school teachers across 78 schools in all seven provinces during education and technology programs conducted by the author’s non-profit organization, Vidhata, in 2024. The key variables included AI Tool Usage Score, Teacher AI Readiness Score, Student Learning Outcome Score, and School Infrastructure Index. The data were analyzed using multiple regression analysis, mediation analysis, and independent-sample t-tests. The results show that AI tool use is positively associated with student learning outcomes (β = 0.34, p < 0.001). Teacher AI readiness partially mediates this relationship and explains approximately 38 percent of the total effect. Significant differences were found between urban and rural schools, between trained and untrained teachers, and between schools with high and low levels of infrastructure. These findings suggest that AI tools can support teaching and learning in Nepal’s government schools. However, their effectiveness depends strongly on teacher capacity and the availability of adequate infrastructure. The study provides policy-relevant evidence to support the equitable and sustainable integration of AI into Nepal’s public education system.
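The mediation result above (teacher readiness explaining roughly 38 percent of the total effect) follows the usual product-of-coefficients logic. The sketch below illustrates that logic on simulated data; the variable names, simulated relationships, and use of ordinary least squares are illustrative assumptions, not the study's actual analysis pipeline.

```python
# Illustrative sketch of a simple mediation analysis of the kind described above
# (AI tool use -> teacher AI readiness -> student learning outcomes).
# All data here are simulated; only the mechanics of the decomposition are shown.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 412  # matches the reported sample size; the values themselves are simulated

ai_use = rng.normal(size=n)                                        # AI Tool Usage Score
readiness = 0.5 * ai_use + rng.normal(size=n)                      # Teacher AI Readiness Score
outcome = 0.2 * ai_use + 0.4 * readiness + rng.normal(size=n)      # Student Learning Outcome Score

# Total effect c: outcome ~ ai_use
c = sm.OLS(outcome, sm.add_constant(ai_use)).fit().params[1]
# Path a: readiness ~ ai_use
a = sm.OLS(readiness, sm.add_constant(ai_use)).fit().params[1]
# Paths c' (direct) and b: outcome ~ ai_use + readiness
X = sm.add_constant(np.column_stack([ai_use, readiness]))
fit = sm.OLS(outcome, X).fit()
c_prime, b = fit.params[1], fit.params[2]

indirect = a * b
print(f"total={c:.3f}  direct={c_prime:.3f}  indirect={indirect:.3f}")
print(f"proportion mediated ~ {indirect / c:.2f}")
```

In practice the indirect effect would usually be accompanied by a bootstrap confidence interval; that step is omitted here to keep the sketch short.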
- Research Article
- 10.12688/mep.20554.1
- Oct 23, 2024
- MedEdPublish
Background ChatGPT is a large language model that uses deep learning techniques to generate human-like texts. ChatGPT has the potential to revolutionize medical education as it acts as an interactive virtual tutor and personalized learning assistant. We assessed the use of ChatGPT and other Artificial Intelligence (AI) tools among medical faculty in Uganda. Methods We conducted a descriptive cross-sectional study among medical faculty at four public universities in Uganda from November to December 2023. Participants were recruited consecutively. We used a semi-structured questionnaire to collect data on participants’ socio-demographics and the use of AI tools such as ChatGPT. Our outcome variable was the use of ChatGPT and other AI tools. Data were analyzed in Stata version 17.0. Results We recruited 224 medical faculty; the majority [75% (167/224)] were male. The median age (interquartile range) was 41 years (34–50). Almost all medical faculty [90% (202/224)] had ever heard of AI tools such as ChatGPT. Over 63% (120/224) of faculty had ever used AI tools. The most commonly used AI tools were ChatGPT (56.3%) and Quill Bot (7.1%). Fifty-six faculty use AI tools for research writing, 37 for summarizing information, 28 for proofreading work, and 28 for setting exams or assignments. Forty faculty use AI tools for non-academic purposes like recreation and learning new skills. Faculty older than 50 years were 40% less likely to use AI tools compared to those aged 24 to 35 years (Adjusted Prevalence Ratio (aPR): 0.60; 95% Confidence Interval (CI): [0.45, 0.80]). Conclusion The use of ChatGPT and other AI tools was high among medical faculty in Uganda. Older faculty (>50 years) were less likely to use AI tools compared to younger faculty. Training on AI use in education, formal policies, and guidelines are needed to adequately prepare medical faculty for the integration of AI in medical education.
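Adjusted prevalence ratios like the aPR of 0.60 reported above are often estimated with a modified Poisson regression using robust standard errors. The sketch below shows that approach on simulated data; the original analysis was run in Stata 17, so the Python code, variable names, and simulated values here are purely illustrative assumptions.

```python
# Illustrative only: modified Poisson regression with robust (HC0) standard errors,
# a common way to estimate an adjusted prevalence ratio (aPR) with a 95% CI for a
# binary outcome such as "has ever used AI tools". Data and covariates are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 224                                   # matches the reported sample size
older_than_50 = rng.integers(0, 2, n)     # 1 = faculty older than 50 years
male = rng.integers(0, 2, n)              # illustrative covariate
p = 0.75 * np.where(older_than_50 == 1, 0.6, 1.0)   # built-in prevalence ratio of ~0.6
uses_ai = rng.binomial(1, np.clip(p, 0, 1))

X = sm.add_constant(np.column_stack([older_than_50, male]))
fit = sm.GLM(uses_ai, X, family=sm.families.Poisson()).fit(cov_type="HC0")

pr = np.exp(fit.params[1])
ci_low, ci_high = np.exp(fit.conf_int()[1])
print(f"aPR for age > 50: {pr:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```

A log-binomial model is another common choice for prevalence ratios; the modified Poisson approach here is only one plausible way such an aPR could have been obtained.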
- Research Article
- 10.12688/mep.20554.3
- Apr 28, 2025
- MedEdPublish
Background ChatGPT is a large language model that uses deep learning techniques to generate human-like texts. ChatGPT has the potential to revolutionize medical education as it acts as an interactive virtual tutor and personalized learning assistant. We assessed the use of ChatGPT and other Artificial Intelligence (AI) tools among medical faculty in Uganda. Methods We conducted a descriptive cross-sectional study among medical faculty at four public universities in Uganda from November to December 2023. Participants were recruited consecutively. We used a semi-structured questionnaire to collect data on participants’ socio-demographics and the use of AI tools such as ChatGPT. Our outcome variable was the use of ChatGPT and other AI tools. Data were analyzed in Stata version 17.0. Results We recruited 224 medical faculty; the majority [75% (167/224)] were male. The median age (interquartile range) was 41 years (34–50). Almost all medical faculty [90% (202/224)] had ever heard of AI tools such as ChatGPT. Over 63% (120/224) of faculty had ever used AI tools. The most commonly used AI tools were ChatGPT (56.3%) and Quill Bot (7.1%). Fifty-six faculty use AI tools for research writing, 37 for summarizing information, 28 for proofreading work, and 28 for setting exams or assignments. Forty faculty use AI tools for non-academic purposes like recreation and learning new skills. Faculty older than 50 years were 40% less likely to use AI tools compared to those aged 24 to 35 years (Adjusted Prevalence Ratio (aPR): 0.60; 95% Confidence Interval (CI): [0.45, 0.80]). Conclusion The use of ChatGPT and other AI tools was high among medical faculty in Uganda. Older faculty (>50 years) were less likely to use AI tools compared to younger faculty. Training on AI use in education, formal policies, and guidelines are needed to adequately prepare medical faculty for the integration of AI in medical education.
- Research Article
- 10.12688/mep.20554.2
- Jan 23, 2025
- MedEdPublish (2016)
ChatGPT is a large language model that uses deep learning techniques to generate human-like texts. ChatGPT has the potential to revolutionize medical education as it acts as an interactive virtual tutor and personalized learning assistant. We assessed the use of ChatGPT and other Artificial Intelligence (AI) tools among medical faculty in Uganda. We conducted a descriptive cross-sectional study among medical faculty at four public universities in Uganda from November to December 2023. Participants were recruited consecutively. We used a semi-structured questionnaire to collect data on participants' socio-demographics and the use of AI tools such as ChatGPT. Our outcome variable was the use of ChatGPT and other AI tools. Data were analyzed in Stata version 17.0. We recruited 224 medical faculty; the majority [75% (167/224)] were male. The median age (interquartile range) was 41 years (34-50). Almost all medical faculty [90% (202/224)] had ever heard of AI tools such as ChatGPT. Over 63% (120/224) of faculty had ever used AI tools. The most commonly used AI tools were ChatGPT (56.3%) and Quill Bot (7.1%). Fifty-six faculty use AI tools for research writing, 37 for summarizing information, 28 for proofreading work, and 28 for setting exams or assignments. Forty faculty use AI tools for non-academic purposes like recreation and learning new skills. Faculty older than 50 years were 40% less likely to use AI tools compared to those aged 24 to 35 years (Adjusted Prevalence Ratio (aPR): 0.60; 95% Confidence Interval (CI): [0.45, 0.80]). The use of ChatGPT and other AI tools was high among medical faculty in Uganda. Older faculty (>50 years) were less likely to use AI tools compared to younger faculty. Training on AI use in education, formal policies, and guidelines are needed to adequately prepare medical faculty for the integration of AI in medical education.
- Research Article
- 10.2478/ctra-2025-0007
- Jan 1, 2025
- Creativity. Theories – Research - Applications
The expansion of artificial intelligence (AI) tools has brought about new opportunities and challenges for teachers and students. These tools have the potential to reshape teaching and stimulate both students’ and teachers’ creativity. In 21st-century education, creativity emerges as a key skill that encompasses problem-solving, innovation, adaptability, critical thinking, and cognitive development. AI tools also provide personalized assistance and feedback as well as customized study materials. Moreover, they have proven beneficial in cultivating critical thinking and enhancing students’ research skills. Instead of questioning teachers’ preparedness for AI technologies, the focus should be on discovering ways to effectively and creatively integrate these tools into the classroom. This paper explores the possibilities of implementing generative AI tools to promote students’ creativity, thus enhancing the overall quality of teaching. In the Croatian educational system, similarly to Poland, school pedagogues should encourage positive changes within the school culture. Therefore, this paper also underscores the role of school pedagogues in bridging the gap between teachers and AI tools as an educational innovation. School pedagogues should be instrumental in supporting teachers during the integration of AI tools into their teaching by showcasing practical applications and emphasizing potential benefits for student engagement and learning outcomes. In this capacity, school pedagogues bear the responsibility of fostering a reflective and critical approach towards AI tools, advocating creative yet responsible use of technology in the classroom.
- Research Article
- 10.1136/bmjopen-2025-099921
- Oct 15, 2025
- BMJ open
Systematic literature reviews (SLRs) are essential for synthesising research evidence and guiding informed decision-making. However, SLRs require significant resources and substantial effort in terms of workload. The introduction of artificial intelligence (AI) tools can reduce this workload. This study aims to investigate preferences for AI tools in SLR screening, focusing on trade-offs among tool attributes. A discrete choice experiment (DCE) was performed in which participants completed 13 or 14 choice tasks featuring AI tools with varying attributes. Data were collected via an online survey, where participants provided background on their education and experience. Professionals who had published SLRs indexed in PubMed, or who were affiliated with a recent Health Economics and Outcomes Research conference, were included as participants. Participants considered the use of a hypothetical AI tool in SLRs with different attributes. Key attributes for AI tools were identified through a literature review and expert consultations. These attributes included the AI tool's role in screening, required user proficiency, sensitivity, workload reduction, and the investment needed for training. The primary outcome was participants' adoption of the AI tool, that is, the likelihood of preferring the AI tool in the choice experiment under different configurations of attribute levels, as captured through the DCE choice tasks. Statistical analysis was performed using a conditional multinomial logit model. An additional analysis was performed by including demographic characteristics (such as education, experience with SLR publication, and familiarity with AI) as interaction variables. The study received responses from 187 participants with diverse experience in performing SLRs and using AI. Familiarity with AI was generally low, with 55.6% of participants being (very) unfamiliar with AI. In contrast, intermediate proficiency in AI tools is positively associated with adoption (p=0.030). Similarly, workload reduction is also strongly linked to adoption (p<0.001). Interestingly, if expert proficiency is needed for the AI tool, authors with more scientific experience in their profession are less likely to adopt AI (p=0.009). However, more experience specifically with SLR publications increases AI adoption likelihood (p=0.001). The findings suggest that workload reduction is not the only consideration for SLR reviewers when using AI tools. The key to AI adoption in SLRs is creating reliable, workload-reducing tools that assist rather than replace human reviewers, with moderate proficiency requirements and high sensitivity.
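The study reports fitting a conditional multinomial logit to the choice tasks. The sketch below hand-rolls a McFadden-style conditional logit on simulated two-alternative tasks to show what that estimation involves; the attribute coding, the simulated preference weights, and the data are assumptions, not the study's.

```python
# Minimal sketch of a McFadden-style conditional logit for DCE choice tasks,
# estimated by maximising the choice log-likelihood over simulated data.
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

rng = np.random.default_rng(2)
n_tasks, n_alts, n_attrs = 2431, 2, 3   # ~187 respondents x 13 tasks, 2 alternatives, 3 attributes

# X[t, j, :] holds the attribute levels of alternative j in choice task t
# (e.g., workload reduction, sensitivity, required user proficiency -- assumed coding)
X = rng.normal(size=(n_tasks, n_alts, n_attrs))
true_beta = np.array([1.0, 0.8, -0.5])
utility = X @ true_beta + rng.gumbel(size=(n_tasks, n_alts))
choice = utility.argmax(axis=1)          # simulated choice: index of the chosen alternative

def neg_log_likelihood(beta):
    v = X @ beta                                        # systematic utility of each alternative
    log_p = v - logsumexp(v, axis=1, keepdims=True)     # logit choice probabilities per task
    return -log_p[np.arange(n_tasks), choice].sum()

res = minimize(neg_log_likelihood, x0=np.zeros(n_attrs), method="BFGS")
print("estimated attribute weights:", np.round(res.x, 2))
```

The estimated weights recover the simulated preference parameters; in a real DCE analysis the attributes would be effects- or dummy-coded levels and interaction terms (e.g., with respondent characteristics) would be added to the design matrix.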
- Research Article
- 10.14444/8778
- Jul 14, 2025
- International journal of spine surgery
Cross-sectional survey study. BACKGROUND: Artificial intelligence (AI) tools are increasingly integrated into various aspects of medicine, including medical research. However, the scope and manner in which early-career surgeons utilize AI tools in their research remain inadequately understood. This study aimed to investigate the frequency and specific applications of AI tools in medical research among early-career surgeons, including their perceptions, concerns, and outlook regarding AI in research. A survey comprising 25 questions was distributed among members of an international club of early-career spine surgeons (<10 years of experience). The survey assessed demographics, AI tool utilization, access to AI training resources, and perceptions of AI benefits and concerns in research. Sixty early-career surgeons participated, with 86.7% reporting AI tool use in their research. ChatGPT was the most frequently utilized tool, with a usage rate of 93.1%. AI tools were primarily used for grammatical proofreading (69.6%) and rephrasing (64.3%), while 26.8% of participants used AI for statistical analysis. While 80.4% perceived improved efficiency as a key benefit, 70.0% expressed concerns about reliability. None of the participants had received formal AI training, and only 15.0% had access to AI mentors. Despite these challenges, 91.6% anticipated a positive long-term impact of AI on research. AI tools are widely adopted among early-career surgeons for various research tasks, extending from text generation to data analysis. However, the absence of formal training and concerns regarding the reliability of AI tools underscore the necessity of training for AI integration in medical research. This study provides timely insights into AI adoption patterns among early-career surgeons, highlighting the urgent need for formal AI training programs to ensure responsible research practices.
- Research Article
- 10.1093/ecco-jcc/jjac190.0907
- Jan 30, 2023
- Journal of Crohn's and Colitis
P777 Deployment of an artificial intelligence tool for precision medicine in ulcerative colitis: Preliminary data from 8 globally distributed clinical sites
- Research Article
- 10.1515/tjb-2023-0254
- Dec 18, 2023
- Turkish Journal of Biochemistry
This paper discusses the integration of artificial intelligence (AI) tools in education, delineating their potential to transform pedagogical practices alongside the challenges they present. Generative AI models like ChatGPT have had a disruptive impact on teaching and learning, due to their ability to create text, images, and sound, revolutionizing educational content creation and modification. However, the educational community is now polarized: some embrace AI for its accessibility and efficiency and advocate it as an indispensable tool, while others caution against risks to academic integrity and intellectual development. This document is designed to raise awareness about AI tools and provide some examples of how they can be used to improve education and learning. From an educator’s perspective, AI is an asset for curriculum development, course material preparation, instructional design, and student assessment, while reducing bias and workload. For students, AI tools offer personalized learning experiences, timely feedback, and support in various academic activities. The Turkish Biochemical Society (TBS) Academy recommends that educators embrace and utilize AI tools to enhance educational processes and engage in peer learning for better adaptation, while maintaining a critical perspective on their utility and limitations. The transfer of AI knowledge and methods into teaching should complement, not replace, the educator’s creativity and critical thinking. The paper advocates for an informed embrace of AI, AI fluency among educators and students, ethical application of AI in academic settings, and continuous engagement with evolving AI technologies, ensuring that AI tools are used to augment critical thinking and contribute positively to education and society.
- Research Article
- 10.62049/jkncu.v5i1.177
- Dec 29, 2024
- Journal of the Kenya National Commission for UNESCO
The purpose of this study was to evaluate the effectiveness of Artificial Intelligence (AI) tools in teaching and learning in higher education institutions in Kenya, focusing specifically on Intelligent Tutoring Systems (ITS), Adaptive Learning Platforms, Virtual Learning Assistants (VLAs), Automated Grading Systems, and Learning Analytics Systems (LAS), and examining their accessibility, use, and effectiveness in teaching and learning. The study employed a mixed-methods research design, combining both quantitative and qualitative approaches, to gather comprehensive data from faculty members, students, and administrators across 15 selected public and private universities and technical colleges in Kenya. The findings indicated that the accessibility of AI tools in institutions of higher learning in Kenya is significantly limited. A large majority of respondents reported that AI tools are not readily available, highlighting disparities in access across different departments and projects within institutions. In terms of usage, the integration of AI tools into teaching and learning practices is still in its early stages in most institutions, and where they are available they are not always well integrated with existing curricula, leading to limited and uneven adoption across different disciplines. Despite these challenges, those who have begun using AI tools have reported benefits such as personalized learning, more efficient assessment processes, and enhanced feedback mechanisms, indicating that AI has the potential to transform educational practices if more effectively utilized. The findings further established a significant correlation between AI tool use and effective teaching and learning in institutions of higher learning in Kenya (r = .781; p < .001). The study noted that while AI can significantly improve the educational experience, its current impact is constrained by several factors. Faculty members' unfamiliarity with AI, the lack of comprehensive training, and the inadequate integration of AI tools into the curriculum are major barriers to their effective use. However, where AI has been successfully implemented, it has contributed to better learning outcomes, higher student engagement, and more personalized feedback. The study recommended that institutions invest in infrastructure, ongoing professional development, and curriculum integration, ensuring that AI tools are both accessible and effectively used to enhance teaching and learning outcomes.
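The reported association (r = .781) is a Pearson correlation. The short sketch below simply shows how such a coefficient and its p-value are computed, using simulated scores rather than the study's data.

```python
# Illustrative only: computing a Pearson correlation of the kind reported above.
# The scores are simulated; only the computation of r and its p-value is shown.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
ai_tool_score = rng.normal(size=120)                               # hypothetical AI tool use scores
effectiveness = 0.8 * ai_tool_score + rng.normal(scale=0.6, size=120)  # hypothetical effectiveness ratings

r, p = stats.pearsonr(ai_tool_score, effectiveness)
print(f"Pearson r = {r:.3f}, p = {p:.3g}")
```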
- Research Article
- 10.1016/j.ejmp.2021.03.015
- Mar 1, 2021
- Physica Medica
Performance of an artificial intelligence tool with real-time clinical workflow integration - Detection of intracranial hemorrhage and pulmonary embolism.