Real-Time Fraud Detection and Prevention Based on Artificial Intelligence Tools
- Research Article
- 10.12688/mep.20554.1
- Oct 23, 2024
- MedEdPublish
Background ChatGPT is a large language model that uses deep learning techniques to generate human-like texts. ChatGPT has the potential to revolutionize medical education as it acts as an interactive virtual tutor and personalized learning assistant. We assessed the use of ChatGPT and other Artificial Intelligence (AI) tools among medical faculty in Uganda. Methods We conducted a descriptive cross-sectional study among medical faculty at four public universities in Uganda from November to December 2023. Participants were recruited consecutively. We used a semi-structured questionnaire to collect data on participants’ socio-demographics and the use of AI tools such as ChatGPT. Our outcome variable was the use of ChatGPT and other AI tools. Data were analyzed in Stata version 17.0. Results We recruited 224 medical faculty, the majority [75% (167/224)] of whom were male. The median age (interquartile range) was 41 years (34–50). Almost all medical faculty [90% (202/224)] had ever heard of AI tools such as ChatGPT. Over 63% (120/224) of faculty had ever used AI tools. The most commonly used AI tools were ChatGPT (56.3%) and Quill Bot (7.1%). Fifty-six faculty used AI tools for research writing, 37 for summarizing information, 28 for proofreading work, and 28 for setting exams or assignments. Forty faculty used AI tools for non-academic purposes like recreation and learning new skills. Faculty older than 50 years were 40% less likely to use AI tools compared to those aged 24 to 35 years (Adjusted Prevalence Ratio (aPR): 0.60; 95% Confidence Interval (CI): [0.45, 0.80]). Conclusion The use of ChatGPT and other AI tools was high among medical faculty in Uganda. Older faculty (>50 years) were less likely to use AI tools compared to younger faculty. Training on AI use in education, formal policies, and guidelines are needed to adequately prepare medical faculty for the integration of AI in medical education.
- Research Article
- 10.12688/mep.20554.3
- Apr 28, 2025
- MedEdPublish
Background ChatGPT is a large language model that uses deep learning techniques to generate human-like texts. ChatGPT has the potential to revolutionize medical education as it acts as an interactive virtual tutor and personalized learning assistant. We assessed the use of ChatGPT and other Artificial Intelligence (AI) tools among medical faculty in Uganda. Methods We conducted a descriptive cross-sectional study among medical faculty at four public universities in Uganda from November to December 2023. Participants were recruited consecutively. We used a semi-structured questionnaire to collect data on participants’ socio-demographics and the use of AI tools such as ChatGPT. Our outcome variable was the use of ChatGPT and other AI tools. Data were analyzed in Stata version 17.0. Results We recruited 224 medical faculty, the majority [75% (167/224)] of whom were male. The median age (interquartile range) was 41 years (34–50). Almost all medical faculty [90% (202/224)] had ever heard of AI tools such as ChatGPT. Over 63% (120/224) of faculty had ever used AI tools. The most commonly used AI tools were ChatGPT (56.3%) and Quill Bot (7.1%). Fifty-six faculty used AI tools for research writing, 37 for summarizing information, 28 for proofreading work, and 28 for setting exams or assignments. Forty faculty used AI tools for non-academic purposes like recreation and learning new skills. Faculty older than 50 years were 40% less likely to use AI tools compared to those aged 24 to 35 years (Adjusted Prevalence Ratio (aPR): 0.60; 95% Confidence Interval (CI): [0.45, 0.80]). Conclusion The use of ChatGPT and other AI tools was high among medical faculty in Uganda. Older faculty (>50 years) were less likely to use AI tools compared to younger faculty. Training on AI use in education, formal policies, and guidelines are needed to adequately prepare medical faculty for the integration of AI in medical education.
- Research Article
- 10.12688/mep.20554.2
- Jan 23, 2025
- MedEdPublish (2016)
ChatGPT is a large language model that uses deep learning techniques to generate human-like texts. ChatGPT has the potential to revolutionize medical education as it acts as an interactive virtual tutor and personalized learning assistant. We assessed the use of ChatGPT and other Artificial Intelligence (AI) tools among medical faculty in Uganda. We conducted a descriptive cross-sectional study among medical faculty at four public universities in Uganda from November to December 2023. Participants were recruited consecutively. We used a semi-structured questionnaire to collect data on participants' socio-demographics and the use of AI tools such as ChatGPT. Our outcome variable was the use of ChatGPT and other AI tools. Data were analyzed in Stata version 17.0. We recruited 224 medical faculty, the majority [75% (167/224)] of whom were male. The median age (interquartile range) was 41 years (34-50). Almost all medical faculty [90% (202/224)] had ever heard of AI tools such as ChatGPT. Over 63% (120/224) of faculty had ever used AI tools. The most commonly used AI tools were ChatGPT (56.3%) and Quill Bot (7.1%). Fifty-six faculty used AI tools for research writing, 37 for summarizing information, 28 for proofreading work, and 28 for setting exams or assignments. Forty faculty used AI tools for non-academic purposes like recreation and learning new skills. Faculty older than 50 years were 40% less likely to use AI tools compared to those aged 24 to 35 years (Adjusted Prevalence Ratio (aPR): 0.60; 95% Confidence Interval (CI): [0.45, 0.80]). The use of ChatGPT and other AI tools was high among medical faculty in Uganda. Older faculty (>50 years) were less likely to use AI tools compared to younger faculty. Training on AI use in education, formal policies, and guidelines are needed to adequately prepare medical faculty for the integration of AI in medical education.
- Book Chapter
- 10.4018/979-8-3693-8292-9.ch021
- Feb 28, 2025
Higher education institutions throughout the world are challenged by the influx of Artificial Intelligence (AI) tools into education. Hence, the awareness and use of AI tools among educators and students in higher education, and their perspectives on AI, are crucial. This chapter comprises a study exploring the awareness, use, and perspectives on AI among educators and students in government and private higher education institutions. A moderate share of educators and a vast majority of students are aware of and use emerging AI tools like ChatGPT or similar applications. Educators are divided on whether AI tools are well known in the university, while students consider AI tools essential to undergraduate success. Professional discussion of AI tools in education is suggested for educators in higher education, and students' insights are important in planning teaching and learning activities.
- Research Article
- 10.2478/ctra-2025-0007
- Jan 1, 2025
- Creativity. Theories – Research - Applications
The expansion of artificial intelligence (AI) tools has brought about new opportunities and challenges for teachers and students. These tools have the potential to reshape teaching and stimulate both students’ and teachers’ creativity. In 21st-century education, creativity emerges as a key skill that encompasses problem-solving, innovation, adaptability, critical thinking, and cognitive development. AI tools also provide personalized assistance and feedback as well as customized study materials. Moreover, they have proven beneficial in cultivating critical thinking and enhancing students’ research skills. Instead of questioning teachers’ preparedness for AI technologies, the focus should be on discovering ways to effectively and creatively integrate these tools into the classroom. This paper explores the possibilities of implementing generative AI tools to promote students’ creativity, thus enhancing the overall quality of teaching. In the Croatian educational system, similarly to Poland, school pedagogues should encourage positive changes within the school culture. Therefore, this paper also underscores the role of school pedagogues in bridging the gap between teachers and AI tools as an educational innovation. School pedagogues should be instrumental in supporting teachers during the integration of AI tools into their teaching by showcasing practical applications and emphasizing potential benefits for student engagement and learning outcomes. In this capacity, school pedagogues bear the responsibility of fostering a reflective and critical approach towards AI tools, advocating creative yet responsible use of technology in the classroom.
- Research Article
- 10.1108/lhtn-08-2024-0131
- Sep 17, 2024
- Library Hi Tech News
Purpose: The paper explores the rapidly evolving landscape of artificial intelligence (AI) tools in academic research, highlighting their potential to transform various stages of the research process and the benefits and challenges they bring. Design/methodology/approach: Academic research is undergoing a significant transformation with the emergence of AI tools, which have the potential to revolutionize various aspects of research, from literature review to writing and proofreading. A comprehensive review of existing literature on AI applications in academic research was conducted, focusing on tools and platforms used at various stages of the research process; an overview of AI applications in literature review, data analysis, writing, and proofreading is given, discussing their benefits and limitations. AI was used in some of the searches for AI applications in use. Findings: The analysis reveals that AI tools can enhance research efficiency, accuracy, and quality, but their adoption raises important ethical and methodological considerations, including questions about authorship, accountability, and the role of human researchers. The authors conclude by outlining future directions for AI integration in academic research and emphasizing the need for responsible adoption. Originality/value: As AI continues to evolve, it is essential for researchers, institutions, and policymakers to address the ethical and methodological implications of AI adoption, ensuring responsible integration and harnessing the full potential of AI tools to advance academic research. This is the paper's contribution to knowledge.
- Research Article
- 10.5204/mcj.3004
- Oct 2, 2023
- M/C Journal
Introduction Author Arthur C. Clarke famously argued that in science fiction literature “any sufficiently advanced technology is indistinguishable from magic” (Clarke). On 30 November 2022, technology company OpenAI publicly released their Large Language Model (LLM)-based chatbot ChatGPT (Chat Generative Pre-Trained Transformer), and instantly it was hailed as world-changing. Initial media stories about ChatGPT highlighted the speed with which it generated new material as evidence that this tool might be both genuinely creative and actually intelligent, in both exciting and disturbing ways. Indeed, ChatGPT is part of a larger pool of Generative Artificial Intelligence (AI) tools that can very quickly generate seemingly novel outputs in a variety of media formats based on text prompts written by users. Yet, claims that AI has become sentient, or has even reached a recognisable level of general intelligence, remain in the realm of science fiction, for now at least (Leaver). That has not stopped technology companies, scientists, and others from suggesting that super-smart AI is just around the corner. Exemplifying this, the same people creating generative AI are also vocal signatories of public letters that ostensibly call for a temporary halt in AI development, but these letters are simultaneously feeding the myth that these tools are so powerful that they are the early form of imminent super-intelligent machines. For many people, the combination of AI technologies and media hype means generative AIs are basically magical insomuch as their workings seem impenetrable, and their existence could ostensibly change the world. This article explores how the hype around ChatGPT and generative AI was deployed across the first six months of 2023, and how these technologies were positioned as either utopian or dystopian, always seemingly magical, but never banal. We look at some initial responses to generative AI, ranging from schools in Australia to picket lines in Hollywood. 
We offer a critique of the utopian/dystopian binary positioning of generative AI, aligning with critics who rightly argue that focussing on these extremes displaces the more grounded and immediate challenges generative AI brings that need urgent answers. Finally, we loop back to the role of schools and educators in repositioning generative AI as something to be tested, examined, scrutinised, and played with both to ground understandings of generative AI, while also preparing today’s students for a future where these tools will be part of their work and cultural landscapes. Hype, Schools, and Hollywood In December 2022, one month after OpenAI launched ChatGPT, Elon Musk tweeted: “ChatGPT is scary good. We are not far from dangerously strong AI”. Musk’s post was retweeted 9400 times, liked 73 thousand times, and presumably seen by most of his 150 million Twitter followers. This type of engagement typified the early hype and language that surrounded the launch of ChatGPT, with reports that “crypto” had been replaced by generative AI as the “hot tech topic” and hopes that it would be “‘transformative’ for business” (Browne). By March 2023, global economic analysts at Goldman Sachs had released a report on the potentially transformative effects of generative AI, saying that it marked the “brink of a rapid acceleration in task automation that will drive labor cost savings and raise productivity” (Hatzius et al.). Further, they concluded that “its ability to generate content that is indistinguishable from human-created output and to break down communication barriers between humans and machines reflects a major advancement with potentially large macroeconomic effects” (Hatzius et al.).
Speculation about the potentially transformative power and reach of generative AI technology was reinforced by warnings that it could also lead to “significant disruption” of the labour market, and the potential automation of up to 300 million jobs, with associated job losses for humans (Hatzius et al.). In addition, there was widespread buzz that ChatGPT’s “rationalization process may evidence human-like cognition” (Browne), claims that were supported by the emergent language of ChatGPT. The technology was explained as being “trained” on a “corpus” of datasets, using a “neural network” capable of producing “natural language” (Dsouza), positioning the technology as human-like, and more than ‘artificial’ intelligence. Incorrect responses or errors produced by the tech were termed “hallucinations”, akin to magical thinking, which OpenAI founder Sam Altman insisted wasn’t a word that he associated with sentience (Intelligencer staff). Indeed, Altman asserts that he rejects moves to “anthropomorphize” (Intelligencer staff) the technology; however, arguably the language, hype, and Altman’s well-publicised misgivings about ChatGPT have had the combined effect of shaping our understanding of this generative AI as alive, vast, fast-moving, and potentially lethal to humanity. Unsurprisingly, the hype around the transformative effects of ChatGPT and its ability to generate ‘human-like’ answers and sophisticated essay-style responses was matched by a concomitant panic throughout educational institutions. The beginning of the 2023 Australian school year was marked by schools and state education ministers meeting to discuss the emerging problem of ChatGPT in the education system (Hiatt). Every state in Australia, bar South Australia, banned the use of the technology in public schools, with a “national expert task force” formed to “guide” schools on how to navigate ChatGPT in the classroom (Hiatt).
Globally, schools banned the technology amid fears that students could use it to generate convincing essay responses whose plagiarism would be undetectable with current software (Clarence-Smith). Some schools banned the technology citing concerns that it would have a “negative impact on student learning”, while others cited its “lack of reliable safeguards preventing these tools exposing students to potentially explicit and harmful content” (Cassidy). ChatGPT investor Musk famously tweeted, “It’s a new world. Goodbye homework!”, further fuelling the growing alarm about the freely available technology that could “churn out convincing essays which can't be detected by their existing anti-plagiarism software” (Clarence-Smith). Universities were reported to be moving towards more “in-person supervision and increased paper assessments” (SBS), rather than essay-style assessments, in a bid to out-manoeuvre ChatGPT’s plagiarism potential. Seven months on, concerns about the technology seem to have been dialled back, with educators more curious about the ways the technology can be integrated into the classroom to good effect (Liu et al.); however, the full implications and impacts of the generative AI are still emerging. In May 2023, the Writer’s Guild of America (WGA), the union representing screenwriters across the US creative industries, went on strike, and one of their core issues was “regulations on the use of artificial intelligence in writing” (Porter). Early in the negotiations, Chris Keyser, co-chair of the WGA’s negotiating committee, lamented that “no one knows exactly what AI’s going to be, but the fact that the companies won’t talk about it is the best indication we’ve had that we have a reason to fear it” (Grobar).
At the same time, the Screen Actors’ Guild (SAG) warned that members were being asked to agree to contracts that stipulated that an actor’s voice could be re-used in future scenarios without that actor’s additional consent, potentially reducing actors to a dataset to be animated by generative AI technologies (Scheiber and Koblin). In a statement issued by SAG, they made their position clear that the creation or (re)animation of any digital likeness of any part of an actor must be recognised as labour and properly paid, also warning that any attempt to legislate around these rights should be strongly resisted (Screen Actors Guild). Unlike the more sensationalised hype, the WGA and SAG responses to generative AI are grounded in labour relations. These unions quite rightly fear the immediate future where human labour could be augmented, reclassified, and exploited by, and in the name of, algorithmic systems. Screenwriters, for example, might be hired at much lower pay rates to edit scripts first generated by ChatGPT, even if those editors would really be doing most of the creative work to turn something clichéd and predictable into something more appealing. Rather than a dystopian world where machines do all the work, the WGA and SAG protests railed against a world where workers would be paid less because executives could pretend generative AI was doing most of the work (Bender). The Open Letter and Promotion of AI Panic In an open letter that received enormous press and media uptake, many of the leading figures in AI called for a pause in AI development since “advanced AI could represent a profound change in the history of life on Earth”; they warned early 2023 had already seen “an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control” (Future of Life Institute). 
Further, the open letter signatories called on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”, arguing that “labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts” (Future of Life Institute). Notably, many of the signatories work for the very companies involved in the “out-of-control race”. Indeed, while this letter could be read as a moment of ethical clarity for the AI industry, a more cynical reading might just be that in warning that their AIs could effectively destroy the w
- Research Article
- 10.1136/bmjopen-2025-099921
- Oct 15, 2025
- BMJ open
Systematic literature reviews (SLRs) are essential for synthesising research evidence and guiding informed decision-making. However, SLRs require significant resources and substantial workload. The introduction of artificial intelligence (AI) tools can reduce this workload. This study aims to investigate preferences in SLR screening, focusing on trade-offs related to tool attributes. A discrete choice experiment (DCE) was performed in which participants completed 13 or 14 choice tasks featuring AI tools with varying attributes. Data were collected via an online survey, where participants provided background on their education and experience. Professionals who had published SLRs indexed in PubMed, or who were affiliated with a recent Health Economics and Outcomes Research conference, were included as participants. Participants considered the use of a hypothetical AI tool with different attributes in SLRs. Key attributes for AI tools were identified through a literature review and expert consultations; these included the AI tool's role in screening, required user proficiency, sensitivity, workload reduction, and the investment needed for training. The outcome was the participants' adoption of the AI tool, that is, the likelihood of preferring the AI tool in the choice experiment under different configurations of attribute levels, as captured through the DCE choice tasks. Statistical analysis was performed using a conditional multinomial logit model. An additional analysis included demographic characteristics (such as education, experience with SLR publication, and familiarity with AI) as interaction variables. The study received responses from 187 participants with diverse experience in performing SLRs and using AI. Familiarity with AI was generally low, with 55.6% of participants being (very) unfamiliar with AI. In contrast, intermediate proficiency in AI tools is positively associated with adoption (p=0.030).
Similarly, workload reduction is also strongly linked to adoption (p<0.001). Interestingly, if expert proficiency is needed for the AI, authors with more scientific experience in their profession are less likely to adopt AI (p=0.009). However, more experience specifically with SLR publications increases AI adoption likelihood (p=0.001). The findings suggest that workload reduction is not the only consideration for SLR reviewers when using AI tools. The key to AI adoption in SLRs is creating reliable, workload-reducing tools that assist rather than replace human reviewers, with moderate proficiency requirements and high sensitivity.
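The conditional multinomial logit behind a DCE can be sketched in a few lines: each hypothetical tool profile receives a linear utility from its attribute levels, and choice probabilities follow a softmax over the alternatives in a choice task. The attribute coefficients and profiles below are invented for illustration and are not the study's estimates:

```python
import numpy as np

# Hypothetical part-worth coefficients for the tool attributes (illustrative).
beta = {
    "sensitivity": 0.04,          # per percentage point of screening sensitivity
    "workload_reduction": 0.03,   # per percentage point of workload saved
    "expert_proficiency": -0.80,  # penalty if expert-level proficiency is required
    "training_days": -0.10,       # per day of training investment
}

def utility(profile):
    """Linear utility of one AI-tool profile in a choice task."""
    return sum(beta[attr] * level for attr, level in profile.items())

# One choice task: two hypothetical AI-tool profiles.
tool_a = {"sensitivity": 95, "workload_reduction": 60,
          "expert_proficiency": 0, "training_days": 2}
tool_b = {"sensitivity": 85, "workload_reduction": 80,
          "expert_proficiency": 1, "training_days": 1}

# Conditional (McFadden) logit: choice probabilities are a softmax of utilities.
u = np.array([utility(tool_a), utility(tool_b)])
p = np.exp(u - u.max()) / np.exp(u - u.max()).sum()
```

Estimating the betas from observed choices would maximise the likelihood of these probabilities over all choice tasks; the sketch only shows the model's forward direction.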
- Research Article
- 10.22214/ijraset.2025.76442
- Dec 31, 2025
- International Journal for Research in Applied Science and Engineering Technology
Artificial Intelligence (AI) tools are increasingly viewed as technologies that can improve teaching and learning. However, there is limited empirical evidence from low-income government school systems. This study examines the relationship between AI tool use, teacher readiness for AI, and student learning outcomes in government schools in Nepal. A quantitative cross-sectional survey design was used. Primary data were collected from 412 government school teachers across 78 schools in all seven provinces during education and technology programs conducted by the author’s non-profit organization, Vidhata, in 2024. The key variables included AI Tool Usage Score, Teacher AI Readiness Score, Student Learning Outcome Score, and School Infrastructure Index. The data were analyzed using multiple regression analysis, mediation analysis, and independent-samples t-tests. The results show that AI tool use is positively associated with student learning outcomes (β = 0.34, p < 0.001). Teacher AI readiness partially mediates this relationship and explains approximately 38 percent of the total effect. Significant differences were found between urban and rural schools, between trained and untrained teachers, and between schools with high and low levels of infrastructure. These findings suggest that AI tools can support teaching and learning in Nepal’s government schools. However, their effectiveness depends strongly on teacher capacity and the availability of adequate infrastructure. The study provides policy-relevant evidence to support the equitable and sustainable integration of AI into Nepal’s public education system.
- Research Article
- 10.1093/ecco-jcc/jjac190.0907
- Jan 30, 2023
- Journal of Crohn's and Colitis
P777 Deployment of an artificial intelligence tool for precision medicine in ulcerative colitis: Preliminary data from 8 globally distributed clinical sites
- Research Article
- 10.14444/8778
- Jul 14, 2025
- International journal of spine surgery
Cross-sectional survey study. Background: Artificial intelligence (AI) tools are increasingly integrated into various aspects of medicine, including medical research. However, the scope and manner in which early-career surgeons utilize AI tools in their research remain inadequately understood. This study aimed to investigate the frequency and specific applications of AI tools in medical research among early-career surgeons, including their perceptions, concerns, and outlook regarding AI in research. A survey comprising 25 questions was distributed among members of an international club of early-career spine surgeons (<10 years of experience). The survey assessed demographics, AI tool utilization, access to AI training resources, and perceptions of AI benefits and concerns in research. Sixty early-career surgeons participated, with 86.7% reporting AI tool use in their research. ChatGPT was the most frequently utilized tool, with a usage rate of 93.1%. AI tools were primarily used for grammatical proofreading (69.6%) and rephrasing (64.3%), while 26.8% of participants used AI for statistical analysis. While 80.4% perceived improved efficiency as a key benefit, 70.0% expressed concerns about reliability. None of the participants had received formal AI training, and only 15.0% had access to AI mentors. Despite these challenges, 91.6% anticipated a positive long-term impact of AI on research. AI tools are widely adopted among early-career surgeons for various research tasks, extending from text generation to data analysis. However, the absence of formal training and concerns regarding the reliability of AI tools underscore the necessity of training for AI integration in medical research. This study provides timely insights into AI adoption patterns among early-career surgeons, highlighting the urgent need for formal AI training programs to ensure responsible research practices.
- Research Article
- 10.33407/itlt.v100i2.5563
- Apr 30, 2024
- Information Technologies and Learning Tools
Due to the rapid development of information technologies, the use of digital solutions for educational purposes is becoming increasingly relevant and promising. This encourages the evaluation and development of new methods that provide a personalized approach to teaching and learning, including the integration of artificial intelligence (AI) tools that could revolutionize education. However, the question of teachers’ qualifications for using AI tools in their pedagogical activities, especially in social and humanitarian disciplines, remains unexplored. The research goal is therefore to investigate the level of competence of teachers of social and humanitarian disciplines in using AI tools in educational activities, which involves educating users about the capabilities, limitations, and proper use of these tools, and understanding how to benefit from AI and how to avoid misuse. The article reveals the advantages and challenges of applying AI tools and analyses some specifics of implementing various AI tools in social and humanitarian disciplines. During the pedagogical experiment, the authors did not limit themselves to ChatGPT; the teachers had an opportunity to explore the features of a number of AI tools that support lesson planning and generate visual content, text materials, and tasks. The expediency of using various AI tools was experimentally verified, and advantages, significant disadvantages, and subject-specific nuances were determined in practice. In addition, the authors determined the current level of competence of teachers of social and humanitarian disciplines. At the control stage, the authors analysed and summarized the dynamics of indicators of teachers’ competence such as awareness of the use of AI tools in educational activities; flexibility and adaptability when working with AI tools; and confidence when implementing AI tools in educational activities.
The conclusions emphasize both the need for further study of the issues of using the educational potential of AI tools and the development of teachers’ digital competence, as well as the formation of a conscious understanding of the risks and limitations in this area by students and teachers.
- Research Article
- 10.24093/awej/call10.11
- Jul 28, 2024
- Arab World English Journal
This study explored the impacts of artificial intelligence (AI) tools on English as a foreign language (EFL) reading instruction. The main aim was to examine EFL learners’ perceptions of using AI tools in their EFL reading classes and explore how those tools could impact their learning. The study addressed the following questions: What were EFL learners’ perceptions of AI tools in reading instruction? And how could AI tools impact EFL learners’ reading skills? To achieve the objectives, an online survey was used to investigate EFL learners’ perspectives on using AI tools and their effects in reading instruction. The findings indicated that learners had positive perceptions of using AI tools in their learning because the tools helped improve their reading skills and increased their confidence and motivation in reading. In addition, using AI tools in reading instruction enhanced EFL learners’ skills because the tools provided supportive and adaptive learning tailored to learners’ needs. However, concerns were raised regarding long-term impacts and optimal integration models. The findings suggested AI showed promise for supporting reading instruction when combined judiciously with traditional methods. The study recommended EFL instructors consider the strategic blending of AI tools in the classroom to enhance reading proficiency and motivation.
- Discussion
- 10.1016/j.arthro.2023.01.014
- Feb 1, 2023
- Arthroscopy: The Journal of Arthroscopic & Related Surgery
How Will Artificial Intelligence Affect Scientific Writing, Reviewing and Editing? The Future is Here …
- Research Article
- 10.1044/2024_ajslp-24-00218
- Nov 4, 2024
- American journal of speech-language pathology
This project explores the perceived implications of artificial intelligence (AI) tools and generative language tools, like ChatGPT, on practice in speech-language pathology. A total of 107 clinician (n = 60) and student (n = 47) participants completed an 87-item survey that included Likert-style questions and open-ended qualitative responses. The survey explored participants' current frequency of use, experience with AI tools, ethical concerns, and concern with replacing clinicians, as well as likelihood to use in particular professional and clinical areas. Results were analyzed in the context of qualitative responses to typed-response open-ended questions. A series of analyses indicated participants are somewhat knowledgeable and experienced with GPT software and other AI tools. Despite a positive outlook and the belief that AI tools are helpful for practice, programs like ChatGPT and other AI tools are infrequently used by speech-language pathologists and students for clinical purposes, mostly restricted to administrative tasks. While impressions of GPT and other AI tools cite the beneficial ways that AI tools can enhance a clinician's workloads, participants indicate a hesitancy to use AI tools and call for institutional guidelines and training for its adoption.