ChatGPT Isn't Magic

Abstract

during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (see The Effect of Open Access).

Citations
Showing 10 of 33 papers
  • Book Chapter
  • 10.4018/979-8-3693-6824-4.ch004
Applications Areas, Possibilities, and Limitations of ChatGPT
  • May 28, 2024
  • Roheen Qamar + 1 more

AI developments have led to the creation of complex language models like ChatGPT, which can produce text that appears human. Although ChatGPT has limitations, it can be helpful in elucidating concepts and offering basic guidance. It cannot access original medical databases or offer the most recent scientific knowledge. Public opinion has scrutinized ChatGPT in a number of areas, including healthcare, education, manufacturing, artificial intelligence (AI), the machine revolution, science, industry, and cyber security. However, there has not been as much research analyzing ChatGPT studies in these settings. The present chapter looks at the various potential applications, difficulties, and upcoming projects for ChatGPT. This review gives a synopsis of studies on ChatGPT in the applications literature. The authors also offer a thorough and original assessment of ChatGPT's future concerns across different applications, including their promise and limitations.

  • Research Article
  • 10.1080/17482798.2024.2438679
Generative AI and children’s digital futures: New research challenges
  • Dec 27, 2024
  • Journal of Children and Media
  • Tama Leaver + 1 more


  • Research Article
  • 10.1080/21670811.2025.2522281
Less Hype, More Drama: Open-Ended Technological Inevitability in Journalistic Discourses About AI in the US, The Netherlands, and Brazil
  • Jun 18, 2025
  • Digital Journalism
  • João C Magalhães + 1 more

This article examines the portrayal of Artificial Intelligence (AI) in journalistic discourses, nuancing assumptions that such coverage constitutes systematic media hype. Building on Pfaffenberger’s (1992) concept of technological drama, we conducted a qualitative textual analysis of AI coverage in three newspapers of record—The New York Times (US), De Volkskrant (Netherlands), and Folha de S.Paulo (Brazil)—focusing on the period between June 2020 and September 2023. The findings indicate that these depictions constitute a multi-faceted drama whose importance is, however, at no point disputed. We theorize this phenomenon as a form of open-ended technological inevitability, where AI’s impact is seen as unavoidable but its trajectory remains undecided.

  • Book Chapter
  • Cited by 3
  • 10.4018/979-8-3693-0724-3.ch007
Cyber Security Challenges and Dark Side of AI
  • Jan 26, 2024
  • Nitish Kumar Ojha + 2 more

Experts believe that trust in cyber security is a volatile phenomenon because of the field's agnostic nature. In this era of advanced technology, where AI behaves like a human being, the meeting of the two is not entirely bright, and things become scarier in the coming wave of AI. In a time when offensive AI is inevitable, can we trust AI completely? In this chapter, the negative impact of AI is reviewed.

  • Open Access
  • Research Article
  • Cited by 15
  • 10.1007/s43681-024-00461-2
The mechanisms of AI hype and its planetary and social costs
  • Apr 2, 2024
  • AI and Ethics
  • Alva Markelius + 4 more

Our global landscape of emerging technologies is increasingly affected by artificial intelligence (AI) hype, a phenomenon with significant large-scale consequences for the global AI narratives being created today. This paper aims to dissect the phenomenon of AI hype in light of its core mechanisms, drawing comparisons between the current wave and historical episodes of AI hype, concluding that the current hype is historically unmatched in terms of magnitude, scale and planetary and social costs. We identify and discuss socio-technical mechanisms fueling AI hype, including anthropomorphism, the proliferation of self-proclaimed AI “experts”, the geopolitical and private sector “fear of missing out” trends and the overuse and misappropriation of the term “AI” in emerging technologies. The second part of the paper seeks to highlight the often-overlooked costs of the current AI hype. We examine its planetary costs as the AI hype exerts tremendous pressure on finite resources and energy consumption. Additionally, we focus on the connection between AI hype and socio-economic injustices, including perpetuation of social inequalities by the huge associated redistribution of wealth and costs to human intelligence. In the conclusion, we offer insights into the implications for how to mitigate AI hype moving forward. We give recommendations of how developers, regulators, deployers and the public can navigate the relationship between AI hype, innovation, investment and scientific exploration, while addressing critical societal and environmental challenges.

  • Book Chapter
  • 10.4018/979-8-3693-6547-2.ch006
AI-Based Chatbots as Civil Servants
  • Jan 24, 2025
  • Halil Yasin Tamer

This chapter analyzes governments' use of artificial intelligence-based (AI-based) chatbots from the perspective of utopian and dystopian insights into governments' digital competencies. The aim of this chapter is to identify the main characteristics of the use of AI-based chatbots in government and to determine whether this issue should be evaluated with a utopian or a dystopian approach. The rapid development of chatbots has begun to create a new civil-servant mechanism in digital government. Civil service, public interest and value, public service, and technological determinism are the main elements of digital government through which chatbots can be seen as determinants of digital competencies. This dimension may create a utopian community for the state and society, while the potential risks and problems related to cyber centralisation are dystopian. In this context, the chatbots used in selected countries' digital government services are compared and evaluated.

  • Research Article
  • 10.1177/13548565251333212
The dystopian imaginaries of ChatGPT: A designed cycle of fear
  • Apr 18, 2025
  • Convergence: The International Journal of Research into New Media Technologies
  • Brent Lucia + 2 more

The advent of OpenAI’s ChatGPT in 2022 catalyzed a wave of excitement and apprehension, but especially fear. This article examines the dystopian narratives that emerged after ChatGPT’s release date. Through a critical analysis of media responses, we uncover how dystopian imaginaries discussing ChatGPT become rhetorically constructed in popular, journalistic discourse. The article locates prevalent anxieties surrounding ChatGPT’s unprecedented text-generation capabilities, and identifies recurrent fears regarding academic integrity, the proliferation of misinformation, ethical dilemmas in human-AI interaction, and the perpetuation of social biases. Moreover, the article introduces the concept of ‘fear cycles’ – recurring patterns of dystopian projections in response to emerging technologies. By documenting and dissecting these fear cycles, we offer insights into the underlying rhetorical features that drive societal reactions to technological advancements. The research ultimately contributes to a nuanced understanding of how ChatGPT dystopian imaginaries develop particular futures, while grounding the present in predictable anxieties related to technological innovation.

  • Research Article
  • 10.1007/s00146-025-02398-4
Ethical and epistemic implications of artificial intelligence in medicine: a stakeholder-based assessment
  • May 25, 2025
  • AI & SOCIETY
  • Jonathan Adams

Abstract As artificial intelligence (AI) technologies become increasingly embedded in high-stakes fields such as healthcare, ethical and epistemic considerations raise the need for evaluative frameworks to assess their societal impacts across multiple dimensions. This paper uses the ethical-epistemic matrix (EEM), a structured framework that integrates both ethical and epistemic principles, to evaluate medical AI applications more comprehensively. Building on the ethical principles of well-being, autonomy, justice, and explicability, the matrix introduces epistemic principles—accuracy, consistency, relevance, and instrumental efficacy—that assess AI’s role in knowledge production. This dual approach enables a nuanced assessment that reflects the diverse perspectives of stakeholders within the medical field—patients, clinicians, developers, the public, and health policy-makers—who assess AI systems differently based on distinct interests and epistemic goals. Although the EEM has been outlined conceptually before, no published research paper has yet used it to explore the ethical and epistemic implications arising in its key intended application domain of AI in medicine. Through a systematic demonstration of the EEM as applied to medical AI, this paper argues that it encourages a broader understanding of AI’s implications and serves as a valuable methodological tool for evaluating future uses. This is illustrated with the case study of AI systems in sleep apnea detection, where the EEM highlights the ethical trade-offs and epistemic challenges that different stakeholders may perceive, which can be made more concrete if the tool is embedded in future technical projects.

  • Research Article
  • 10.24310/crf.16.2.2024.19654
¿Singularidad? Limitaciones, capacidades y diferencias de la inteligencia artificial frente a la inteligencia humana
  • Dec 15, 2024
  • Claridades. Revista de Filosofía
  • Pablo Carrera

In this article we address the questions of whether AI has really reached the level of human intelligence, some of the reasons behind this state of opinion, and several of the fundamental differences between AI and human intelligence. We briefly trace the historical development of AI, and then review the real capabilities and important limitations of the deep learning techniques on which recent advances in AI are based. We particularly address the argument that complex cognitive capacities are inseparable from a biological body interacting with a physical and sociocultural world, in contrast to an AI grounded in a dualist, cognitivist axiom that has been criticized as incomplete or partial. We conclude by considering the real risks of AI today, as well as some speculations about its future development.

  • Open Access
  • Research Article
  • Cited by 7
  • 10.1080/25741136.2024.2355597
Generative-AI, the media industries, and the disappearance of human creative labour
  • May 22, 2024
  • Media Practice and Education
  • Stuart Bender

ABSTRACT This article addresses the transformative role of Generative-AI (Gen-AI) in the creative media and arts industries, focusing on concerns about the disappearance of human creative labour. It critically examines the discourse of the 2023 Writers’ and Actors’ strikes, which replicates prevailing assumptions of the superiority of human creativity over Gen-AI. This discourse emphasises a ‘replacing tasks’ model, anticipating a future where AI assists human creatives in a limited capacity. Against this background, the article applies the ‘meaningful work’ framework to provide an approach to human-AI coexistence which values amplifying human creativity rather than merely supplementing (or supplanting) it. This framework is a conceptual shift which more convincingly recognises and values human contributions in the media industries. Drawing on historical parallels, such as the transition to digital visual effects during the production of Jurassic Park (1992), the article demonstrates how transformational technologies can transcend mere task simplification. The article underscores the importance of creative artists actively finding ways to begin clearly articulating the specific details of human creativity that comprise their artistic agency, and therefore the article advocates an approach that theorises the intrinsic value of human contributions to the media industries while accommodating Gen-AI.

Similar Papers
  • Research Article
  • Cited by 8
  • 10.1287/ijds.2023.0007
How Can IJDS Authors, Reviewers, and Editors Use (and Misuse) Generative AI?
  • Apr 1, 2023
  • INFORMS Journal on Data Science
  • Galit Shmueli + 7 more


  • Research Article
  • 10.6087/kcse.352
Ethical guidelines for the use of generative artificial intelligence and artificial intelligence-assisted tools in scholarly publishing: a thematic analysis
  • Feb 5, 2025
  • Science Editing
  • Adéle Da Veiga

Purpose: This analysis aims to propose guidelines for artificial intelligence (AI) research ethics in scientific publications, intending to inform publishers and academic institutional policies in order to guide them toward a coherent and consistent approach to AI research ethics. Methods: A literature-based thematic analysis was conducted. The study reviewed the publication policies of the top 10 journal publishers addressing the use of AI in scholarly publications as of October 2024. Thematic analysis using Atlas.ti identified themes and subthemes across the documents, which were consolidated into proposed research ethics guidelines for using generative AI and AI-assisted tools in scholarly publications. Results: The analysis revealed inconsistencies among publishers’ policies on AI use in research and publications. AI-assisted tools for grammar and formatting are generally accepted, but positions vary regarding generative AI tools used in pre-writing and research methods. Key themes identified include author accountability, human oversight, recognized and unrecognized uses of AI tools, and the necessity for transparency in disclosing AI usage. All publishers agree that AI tools cannot be listed as authors. Concerns involve biases, quality and reliability issues, compliance with intellectual property rights, and limitations of AI detection tools. Conclusion: The article highlights the significant knowledge gap and inconsistencies in guidelines for AI use in scientific research. There is an urgent need for unified ethical standards, and guidelines are proposed for distinguishing between the accepted use of AI-assisted tools and the cautious use of generative AI tools.

  • Discussion
  • Cited by 6
  • 10.1016/j.ebiom.2023.104672
Response to M. Trengove & coll regarding "Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine".
  • Jul 1, 2023
  • eBioMedicine
  • Stefan Harrer


  • Front Matter
  • Cited by 10
  • 10.1016/j.jval.2021.12.009
The Value of Artificial Intelligence for Healthcare Decision Making—Lessons Learned
  • Jan 31, 2022
  • Value in Health
  • Danielle Whicher + 1 more


  • Research Article
  • 10.12688/mep.20554.3
Utilisation of ChatGPT and other Artificial Intelligence tools among medical faculty in Uganda: a cross-sectional study
  • Apr 28, 2025
  • MedEdPublish
  • David Mukunya + 18 more

Background ChatGPT is a large language model that uses deep learning techniques to generate human-like texts. ChatGPT has the potential to revolutionize medical education as it acts as an interactive virtual tutor and personalized learning assistant. We assessed the use of ChatGPT and other Artificial Intelligence (AI) tools among medical faculty in Uganda. Methods We conducted a descriptive cross-sectional study among medical faculty at four public universities in Uganda from November to December 2023. Participants were recruited consecutively. We used a semi-structured questionnaire to collect data on participants’ socio-demographics and the use of AI tools such as ChatGPT. Our outcome variable was the use of ChatGPT and other AI tools. Data were analyzed in Stata version 17.0. Results We recruited 224 medical faculty, the majority [75% (167/224)] of whom were male. The median age (interquartile range) was 41 years (34–50). Almost all medical faculty [90% (202/224)] had ever heard of AI tools such as ChatGPT. Over 63% (120/224) of faculty had ever used AI tools. The most commonly used AI tools were ChatGPT (56.3%) and Quill Bot (7.1%). Fifty-six faculty use AI tools for research writing, 37 for summarizing information, 28 for proofreading work, and 28 for setting exams or assignments. Forty faculty use AI tools for non-academic purposes like recreation and learning new skills. Faculty older than 50 years were 40% less likely to use AI tools compared to those aged 24 to 35 years (Adjusted Prevalence Ratio (aPR): 0.60; 95% Confidence Interval (CI): [0.45, 0.80]). Conclusion The use of ChatGPT and other AI tools was high among medical faculty in Uganda. Older faculty (>50 years) were less likely to use AI tools compared to younger faculty. Training on AI use in education, formal policies, and guidelines are needed to adequately prepare medical faculty for the integration of AI in medical education.

  • Research Article
  • 10.12688/mep.20554.2
Utilisation of ChatGPT and other Artificial Intelligence tools among medical faculty in Uganda: a cross-sectional study.
  • Jan 23, 2025
  • MedEdPublish (2016)
  • David Mukunya + 18 more

ChatGPT is a large language model that uses deep learning techniques to generate human-like texts. ChatGPT has the potential to revolutionize medical education as it acts as an interactive virtual tutor and personalized learning assistant. We assessed the use of ChatGPT and other Artificial Intelligence (AI) tools among medical faculty in Uganda. We conducted a descriptive cross-sectional study among medical faculty at four public universities in Uganda from November to December 2023. Participants were recruited consecutively. We used a semi-structured questionnaire to collect data on participants' socio-demographics and the use of AI tools such as ChatGPT. Our outcome variable was the use of ChatGPT and other AI tools. Data were analyzed in Stata version 17.0. We recruited 224 medical faculty, the majority [75% (167/224)] of whom were male. The median age (interquartile range) was 41 years (34-50). Almost all medical faculty [90% (202/224)] had ever heard of AI tools such as ChatGPT. Over 63% (120/224) of faculty had ever used AI tools. The most commonly used AI tools were ChatGPT (56.3%) and Quill Bot (7.1%). Fifty-six faculty use AI tools for research writing, 37 for summarizing information, 28 for proofreading work, and 28 for setting exams or assignments. Forty faculty use AI tools for non-academic purposes like recreation and learning new skills. Faculty older than 50 years were 40% less likely to use AI tools compared to those aged 24 to 35 years (Adjusted Prevalence Ratio (aPR): 0.60; 95% Confidence Interval (CI): [0.45, 0.80]). The use of ChatGPT and other AI tools was high among medical faculty in Uganda. Older faculty (>50 years) were less likely to use AI tools compared to younger faculty. Training on AI use in education, formal policies, and guidelines are needed to adequately prepare medical faculty for the integration of AI in medical education.

  • Front Matter
  • Cited by 4
  • 10.1016/s2589-7500(22)00068-1
Holding artificial intelligence to account
  • Apr 5, 2022
  • The Lancet Digital Health
  • The Lancet Digital Health


  • Research Article
  • 10.51702/esoguifd.1583408
Ethical and Theological Problems Related to Artificial Intelligence
  • May 15, 2025
  • Eskişehir Osmangazi Üniversitesi İlahiyat Fakültesi Dergisi
  • Necmi Karslı

Artificial intelligence is defined as the totality of systems and programs that imitate human intelligence and may eventually surpass it. The rapid development of these technologies has raised various ethical debates such as moral responsibility, privacy, bias, respect for human rights, and social impacts. This study examines the technical infrastructure of artificial intelligence, the differences between weak and strong artificial intelligence, ethical issues, and theological dimensions in detail, providing a comprehensive perspective on the role of artificial intelligence in human life and the problems it brings. The historical development of artificial intelligence has been shaped by the contributions of various disciplines such as mathematical logic, cognitive science, philosophy, and engineering. From the ancient Greek philosophers to the present day, thoughts on artificial intelligence have raised deep philosophical questions such as human nature, consciousness, and responsibility. The algorithms developed by Alan Turing contributed to the modern shaping of artificial intelligence and put forward the first models, such as the “Turing Test”, to assess whether machines have human-like intelligence. The study first analyzes the technical infrastructure of artificial intelligence in detail and discusses the current limits and potential of the technology through the distinction between weak and strong artificial intelligence. Weak artificial intelligence includes systems designed to perform specific tasks that do not exhibit general intelligence outside of those tasks, while strong artificial intelligence refers to systems with human-like general intelligence and flexible thinking capacity. Most of the widely used artificial intelligence applications today fall into the category of weak artificial intelligence.
However, the development of strong artificial intelligence brings various ethical and theological consequences for humanity. The ethical issues of artificial intelligence include fundamental topics such as autonomy, responsibility, transparency, fairness, and privacy. The decision-making processes of autonomous systems raise serious ethical questions at the societal level. Especially autonomous weapons and artificial intelligence-managed justice systems raise concerns in terms of human rights and individual freedoms. In this context, the ethical framework of artificial intelligence has deep impacts on the future of humanity and human-machine interaction, not just limited to technological boundaries. From a theological perspective, the ability of artificial intelligence to imitate the human mind and creative processes raises deep theological issues such as the creativity of God, the place of human beings in the universe, and consciousness. The questions of whether artificial intelligence systems can gain consciousness and whether these conscious systems can have a spiritual status have led to new debates in theology and philosophy. The ethical principles of artificial intelligence are shaped around principles such as transparency, accountability, autonomy, human control, and data management. In conclusion, determining the ethical and theological principles that need to be considered in the development and application of artificial intelligence is critical for the future of humanity. A comprehensive examination of the ethical and theological dimensions of artificial intelligence technologies is necessary to understand and manage the social impacts of this technology. This study emphasizes the necessity of an interdisciplinary approach for the development of artificial intelligence in harmony with social values and for the benefit of humanity. 
The study provides an important theoretical framework for future research by shedding light on the complex ethical and theological issues arising from the development and widespread use of artificial intelligence.

  • Research Article
  • Cited by 1
  • 10.12688/mep.20554.1
Utilisation of ChatGPT and other Artificial Intelligence tools among medical faculty in Uganda: a cross-sectional study
  • Oct 23, 2024
  • MedEdPublish
  • David Mukunya + 18 more

Background ChatGPT is an open-source large language model that uses deep learning techniques to generate human-like texts. ChatGPT has the potential to revolutionize medical education as it acts as an interactive virtual tutor and personalized learning assistant. We assessed the use of ChatGPT and other Artificial Intelligence (AI) tools among medical faculty in Uganda. Methods We conducted a descriptive cross-sectional study among medical faculty at four public universities in Uganda from November to December 2023. Participants were recruited consecutively. We used a semi-structured questionnaire to collect data on participants’ socio-demographics and the use of AI tools such as ChatGPT. Our outcome variable was the use of ChatGPT and other AI tools. Data were analyzed in Stata version 17.0. Results We recruited 224 medical faculty, the majority [75% (167/224)] of whom were male. The median age (interquartile range) was 41 years (34–50). Almost all medical faculty [90% (202/224)] had ever heard of AI tools such as ChatGPT. Over 63% (120/224) of faculty had ever used AI tools. The most commonly used AI tools were ChatGPT (56.3%) and Quill Bot (7.1%). Fifty-six faculty use AI tools for research writing, 37 for summarizing information, 28 for proofreading work, and 28 for setting exams or assignments. Forty faculty use AI tools for non-academic purposes like recreation and learning new skills. Faculty older than 50 years were 40% less likely to use AI tools compared to those aged 24 to 35 years (Adjusted Prevalence Ratio (aPR): 0.60; 95% Confidence Interval (CI): [0.45, 0.80]). Conclusion The use of ChatGPT and other AI tools was high among medical faculty in Uganda. Older faculty (>50 years) were less likely to use AI tools compared to younger faculty. Training on AI use in education, formal policies, and guidelines are needed to adequately prepare medical faculty for the integration of AI in medical education.

  • Research Article
  • 10.1186/s40561-025-00406-0
How do generative artificial intelligence (AI) tools and large language models (LLMs) influence language learners’ critical thinking in EFL education? A systematic review
  • Aug 4, 2025
  • Smart Learning Environments
  • Jing Liu + 2 more

As generative artificial intelligence (AI) tools and large language model (LLM)-powered applications develop rapidly in the era of algorithms, they should be integrated thoughtfully to enhance English as a Foreign Language (EFL) teaching and learning without replacing learners’ critical thinking (CT). This study systematically analyzes the impact of generative AI tools and LLMs on language learners’ CT in EFL education using the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) framework to identify, evaluate, and synthesize relevant studies from 2022 to 2025. A thorough review of 15 selected studies, drawn from Web of Science (WoS), SCOPUS, ERIC, ProQuest, and Google Scholar, focuses on the dual nature of generative AI tools and LLMs, research methods, main focuses, theories and models, limitations and challenges, and future directions in the field. The findings indicate that generative AI tools and LLMs possess both the potential to nurture and the risk of hindering CT in EFL education: 66.67% of studies reported a positive role in CT, while 33.33% reported a negative role. Furthermore, 3 types of research methods, 3 key themes of research focus, and 4 groups of theoretical perspectives were examined. However, 4 kinds of limitations in this field remain, including research scope, user dependency, generative AI reliability, and pedagogical integration. Future research can focus on assessing long-term effects, broadening research scope, promoting responsible AI use, and refining pedagogical strategies. Finally, the limitations, implications, and future directions of this study are discussed.

  • Research Article
  • 10.69554/fmai7138
Pay attention to the chatbot behind the curtain when AI ‘is no place like home’ : A framework and toolkit for integrating critical thinking and information literacy in educational and professional settings
  • Mar 1, 2025
  • Advances in Online Education: A Peer-Reviewed Journal
  • Araminta Star Matthews + 1 more

Over the past three decades, the evolution of technology has dramatically reshaped the information landscape, making it easier to access and simultaneously easier to distort. The advent of artificial intelligence (AI), particularly generative tools like ChatGPT and CoPilot, has further complicated the pursuit of information literacy, posing significant challenges for educators, librarians and students alike. This paper explores the implications of integrating generative AI (GenAI) tools into educational and professional settings, emphasising the necessity of critical thinking and the development of robust information literacy skills to discern the credibility and authority of AI-generated content. By examining the Association of College and Research Libraries’ (ACRL) ‘Framework for Information Literacy for Higher Education’, this paper provides strategies to identify risk areas related to AI integration as well as produce use cases for large language model (LLM) GenAI tools, including a flowchart for determining when to make use of GenAI, a toolkit for positive/effective use cases, and a rubric for assessing information literacy and critical thinking. While AI tools can offer valuable educational opportunities, their propensity to generate misleading or inaccurate information necessitates a careful and informed approach to their use. This paper concludes with a call for ongoing vigilance in maintaining academic integrity and underscores the importance of continuously questioning the reliability of AI outputs in educational contexts.

  • Research Article
  • Cited by 8
  • 10.1177/02734753241302459
Generative AI in Higher Education Assessments: Examining Risk and Tech-Savviness on Student’s Adoption
  • Dec 23, 2024
  • Journal of Marketing Education
  • Yusuf Oc + 2 more

The integration of generative artificial intelligence (AI) tools is a paradigm shift in enhanced learning methodologies and assessment techniques. This study explores the adoption of generative AI tools in higher education assessments by examining the perceptions of 353 students through a survey and 17 in-depth interviews. Anchored in the Unified Theory of Acceptance and Use of Technology (UTAUT), this study investigates the roles of perceived risk and tech-savviness in the use of AI tools. Perceived risk emerged as a significant deterrent, while trust and tech-savviness were pivotal in shaping student engagement with AI. Tech-savviness not only influenced adoption but also moderated the effect of performance expectancy on AI use. These insights extend UTAUT’s application, highlighting the importance of considering perceived risks and individual proficiency with technology. The findings suggest educators and policymakers need to tailor AI integration strategies to accommodate students’ personal characteristics and diverse needs, harnessing generative AI’s opportunities and mitigating its challenges.

  • Research Article
  • 10.14742/apubs.2024.1196
Students as collaborative partners
  • Nov 11, 2024
  • ASCILITE Publications
  • Yasaman Mohammadi + 2 more

In contemporary society, Artificial Intelligence (AI) pervades numerous facets of our lives and is likely to impact many sectors and professions, including education. Tertiary-level students in particular face challenges regarding the use of AI for studies and assessment, including limited understanding of AI tools, as well as a lack of deep critical engagement with AI for learning (Shibani et al., 2024). To respond to emerging developments in generative AI, the recent Australian Tertiary Education Quality and Standards Agency (TEQSA) report suggests tertiary-level learning and assessments be designed to foster responsible and ethical use of AI (Lodge et al., 2023). This involves the development of AI literacy among students to engage with AI in critical, ethical ways that aid their learning and not hinder it. Our project aims to narrow the AI literacy gap among students from diverse study backgrounds by providing foundational knowledge and developing critical skills for practical use of AI tools for learning and professional practice, in collaboration with students and academics as part of a Students as Partners (SAP) initiative. Staff bring expertise in AI critical engagement and students bring practical, first-hand experiences of learning in this collaboration, supported by the university’s SAP program. Building on the current UNESCO recommendations for the use of generative AI in education (UNESCO, 2023) and prior theoretical frameworks on AI literacy (Chiu et al., 2024; Ng, et al., 2021; Southworth et al., 2023) we target key skills that higher education students should develop to meaningfully engage with AI. By creating accessible and engaging resources, such as instructional videos and comprehensive guides on generative AI applications like ChatGPT and ways to prompt for enhancing learning, we introduce existing AI tools and teach students to use them effectively, promoting a hands-on learning environment. 
Using learning design principles, the developed curriculum will be presented as an AI Literacy module on a Canvas site, with supporting instruction workshopped with student participants for evaluation. Student cohorts recruited from diverse disciplines will pilot and assess the effectiveness of the program, and qualitative methods such as focus groups and interviews will be used to evaluate our intervention and support continuous improvement. Findings will shed light on tertiary students’ current level of AI literacy and the effectiveness of interventions to improve key skills beyond their disciplinary knowledge, better preparing them for life beyond university. Indeed, the implementation of similar AI literacy courses has demonstrated statistically significant improvements in AI literacy and understanding of AI concepts among university students (Kong, Cheung, & Zhang, 2021). Our approach underscores the importance of relational engagement in higher education with students as partners (Matthews, 2018) and of participatory design with students in a topic that is significant in the current age of AI (Laupichler et al., 2022). The course’s flexibility to be accessed directly or embedded into other curricula ensures scalability and broader impact, solidifying the validity of our multifaceted approach. Through relevant research methodologies and learning design principles, we endeavour to create an AI literacy course that is robust, accessible, educational, and engaging for tertiary-level students from diverse study backgrounds.

  • Research Article
  • 12 citations
  • 10.1177/27526461231215083
To use or not to use ChatGPT and assistive artificial intelligence tools in higher education institutions? The modern-day conundrum – students’ and faculty’s perspectives
  • Nov 11, 2023
  • Equity in Education & Society
  • Charmaine Bissessar

Students’ use of Artificial Intelligence tools to complete assignments raises issues of academic integrity. The purpose of this study was to explore students’ and faculty’s perspectives on the benefits and challenges of using ChatGPT and assistive Artificial Intelligence (AI) tools to complete assignments. This descriptive phenomenological qualitative study encompassed interviews with eight students who used Large Language Model (LLM) AI tools to complete their assignments and nine students who did not. It also included interviews with six faculty members about their perspectives on students’ use of LLM AI tools to complete assignments and their thoughts on the benefits and challenges. The participants were purposively selected. The data were coded following Braun and Clarke’s (2013) six steps of thematic analysis, using descriptive, in vivo, and evaluative coding. Additionally, data were examined semantically and latently using reductionist analysis to determine the final themes. Five components of the Unified Theory of Acceptance and Use of Technology (UTAUT) were applied to the data collected and provided the framework for the study, with behavioural intention serving as the foundation. Effort and performance expectancies and facilitating conditions were exemplified in participants’ responses about the use of ChatGPT, Grammarly, and other assistive AI tools and about plagiarism/academic integrity; social influence was indicated when participants (both students and faculty) suggested the need to develop policies and procedures for the appropriate use of AI tools. Effort and performance expectancies and habits were found in the data in the form of consideration of the pros of using AI tools such as ChatGPT and assistive tools.
These pros include the time saved by generating information, examples for both students and faculty, and help in the teaching/learning process; one participant found that it motivated her. The cons cited were students’ lack of creativity and inability to think critically, the cost of the assistive AI tools (related to the Price component), the bandwidth needed to use them, the digital divide, and the false information generated. This study has significance for the use of ChatGPT and assistive AI tools in education and their ethical implications. It is recommended that specific policies be established and enacted to ensure the appropriate use of assistive and AI (LLM) tools.

  • Research Article
  • 1 citation
  • 10.58600/eurjther1880
Should We Wait for Major Frauds to Unveil to Plan an AI Use License?
  • Dec 22, 2023
  • European Journal of Therapeutics
  • Istemihan Coban


More from: M/C Journal
  • Research Article
  • 10.5204/mcj.3228
The Value of Vlogs
  • Oct 22, 2025
  • M/C Journal
  • Ümit Kennedy

  • Research Article
  • 10.5204/mcj.3163
Platform-Driven Vlogging
  • Oct 20, 2025
  • M/C Journal
  • Jia Guo + 1 more

  • Research Article
  • 10.5204/mcj.3196
Disenchantment and Re-Enchantment of YouTube Vlogging
  • Oct 20, 2025
  • M/C Journal
  • Patricia G Lange

  • Research Article
  • 10.5204/mcj.3188
YouTube and Radical Change
  • Oct 20, 2025
  • M/C Journal
  • Michael Strangelove

  • Research Article
  • 10.5204/mcj.3200
Reaction Videos, Researcher Positionality, and Falling Back in Love with Vlogging
  • Oct 20, 2025
  • M/C Journal
  • Renata Lisowski

  • Research Article
  • 10.5204/mcj.3194
Methods and Ethics for Research about Women’s Vlogs that Disclose Experiences of Sexual Violence
  • Oct 19, 2025
  • M/C Journal
  • Carol Harrington

  • Research Article
  • 10.5204/mcj.3203
Reflections on a Novel Method for Exploring Audience Reception of Fictional “Vlogs”
  • Oct 19, 2025
  • M/C Journal
  • Caitlin Adams

  • Research Article
  • 10.5204/mcj.3214
From Love Meetings (Pier Paolo Pasolini, 1964) to Vlogs
  • Oct 19, 2025
  • M/C Journal
  • Stefano Odorico

  • Research Article
  • 10.5204/mcj.3201
The Ethics of Accidental Vlogs
  • Oct 19, 2025
  • M/C Journal
  • Ryan Mcgrady + 1 more

  • Research Article
  • 10.5204/mcj.3193
Cultural Adaptation on YouTube
  • Oct 19, 2025
  • M/C Journal
  • Sevda Kaya Kitınur
