Impact of generative artificial intelligence tools on the students learning style and their learning outcomes
- Research Article
- 10.5204/mcj.3004
- Oct 2, 2023
- M/C Journal
Introduction
Author Arthur C. Clarke famously argued that in science fiction literature “any sufficiently advanced technology is indistinguishable from magic” (Clarke). On 30 November 2022, technology company OpenAI publicly released their Large Language Model (LLM)-based chatbot ChatGPT (Chat Generative Pre-Trained Transformer), and instantly it was hailed as world-changing. Initial media stories about ChatGPT highlighted the speed with which it generated new material as evidence that this tool might be both genuinely creative and actually intelligent, in both exciting and disturbing ways. Indeed, ChatGPT is part of a larger pool of Generative Artificial Intelligence (AI) tools that can very quickly generate seemingly novel outputs in a variety of media formats based on text prompts written by users. Yet, claims that AI has become sentient, or has even reached a recognisable level of general intelligence, remain in the realm of science fiction, for now at least (Leaver). That has not stopped technology companies, scientists, and others from suggesting that super-smart AI is just around the corner. Exemplifying this, the same people creating generative AI are also vocal signatories of public letters that ostensibly call for a temporary halt in AI development, but these letters are simultaneously feeding the myth that these tools are so powerful that they are the early form of imminent super-intelligent machines. For many people, the combination of AI technologies and media hype means generative AIs are basically magical insomuch as their workings seem impenetrable, and their existence could ostensibly change the world. This article explores how the hype around ChatGPT and generative AI was deployed across the first six months of 2023, and how these technologies were positioned as either utopian or dystopian, always seemingly magical, but never banal. We look at some initial responses to generative AI, ranging from schools in Australia to picket lines in Hollywood.
We offer a critique of the utopian/dystopian binary positioning of generative AI, aligning with critics who rightly argue that focussing on these extremes displaces the more grounded and immediate challenges generative AI brings that need urgent answers. Finally, we loop back to the role of schools and educators in repositioning generative AI as something to be tested, examined, scrutinised, and played with, both to ground understandings of generative AI and to prepare today’s students for a future where these tools will be part of their work and cultural landscapes.
Hype, Schools, and Hollywood
In December 2022, one month after OpenAI launched ChatGPT, Elon Musk tweeted: “ChatGPT is scary good. We are not far from dangerously strong AI”. Musk’s post was retweeted 9400 times, liked 73 thousand times, and presumably seen by most of his 150 million Twitter followers. This type of engagement typified the early hype and language that surrounded the launch of ChatGPT, with reports that “crypto” had been replaced by generative AI as the “hot tech topic” and hopes that it would be “‘transformative’ for business” (Browne). By March 2023, global economic analysts at Goldman Sachs had released a report on the potentially transformative effects of generative AI, saying that it marked the “brink of a rapid acceleration in task automation that will drive labor cost savings and raise productivity” (Hatzius et al.). Further, they concluded that “its ability to generate content that is indistinguishable from human-created output and to break down communication barriers between humans and machines reflects a major advancement with potentially large macroeconomic effects” (Hatzius et al.).
Speculation about the potentially transformative power and reach of generative AI technology was reinforced by warnings that it could also lead to “significant disruption” of the labour market, and the potential automation of up to 300 million jobs, with associated job losses for humans (Hatzius et al.). In addition, there was widespread buzz that ChatGPT’s “rationalization process may evidence human-like cognition” (Browne), claims that were supported by the emergent language of ChatGPT. The technology was explained as being “trained” on a “corpus” of datasets, using a “neural network” capable of producing “natural language” (Dsouza), positioning the technology as human-like, and more than ‘artificial’ intelligence. Incorrect responses or errors produced by the tech were termed “hallucinations”, akin to magical thinking, which OpenAI founder Sam Altman insisted wasn’t a word that he associated with sentience (Intelligencer staff). Indeed, Altman asserts that he rejects moves to “anthropomorphize” (Intelligencer staff) the technology; however, arguably the language, hype, and Altman’s well-publicised misgivings about ChatGPT have had the combined effect of shaping our understanding of this generative AI as alive, vast, fast-moving, and potentially lethal to humanity. Unsurprisingly, the hype around the transformative effects of ChatGPT and its ability to generate ‘human-like’ answers and sophisticated essay-style responses was matched by a concomitant panic throughout educational institutions. The beginning of the 2023 Australian school year was marked by schools and state education ministers meeting to discuss the emerging problem of ChatGPT in the education system (Hiatt). Every state in Australia, bar South Australia, banned the use of the technology in public schools, with a “national expert task force” formed to “guide” schools on how to navigate ChatGPT in the classroom (Hiatt).
Globally, schools banned the technology amid fears that students could use it to generate convincing essay responses whose plagiarism would be undetectable with current software (Clarence-Smith). Some schools banned the technology citing concerns that it would have a “negative impact on student learning”, while others cited its “lack of reliable safeguards preventing these tools exposing students to potentially explicit and harmful content” (Cassidy). ChatGPT investor Musk famously tweeted, “It’s a new world. Goodbye homework!”, further fuelling the growing alarm about the freely available technology that could “churn out convincing essays which can't be detected by their existing anti-plagiarism software” (Clarence-Smith). Universities were reported to be moving towards more “in-person supervision and increased paper assessments” (SBS), rather than essay-style assessments, in a bid to out-manoeuvre ChatGPT’s plagiarism potential. Seven months on, concerns about the technology seem to have been dialled back, with educators more curious about the ways the technology can be integrated into the classroom to good effect (Liu et al.); however, the full implications and impacts of generative AI are still emerging. In May 2023, the Writer’s Guild of America (WGA), the union representing screenwriters across the US creative industries, went on strike, and one of their core issues was “regulations on the use of artificial intelligence in writing” (Porter). Early in the negotiations, Chris Keyser, co-chair of the WGA’s negotiating committee, lamented that “no one knows exactly what AI’s going to be, but the fact that the companies won’t talk about it is the best indication we’ve had that we have a reason to fear it” (Grobar).
At the same time, the Screen Actors’ Guild (SAG) warned that members were being asked to agree to contracts that stipulated that an actor’s voice could be re-used in future scenarios without that actor’s additional consent, potentially reducing actors to a dataset to be animated by generative AI technologies (Scheiber and Koblin). In a statement issued by SAG, they made their position clear that the creation or (re)animation of any digital likeness of any part of an actor must be recognised as labour and properly paid, also warning that any attempt to legislate around these rights should be strongly resisted (Screen Actors Guild). Unlike the more sensationalised hype, the WGA and SAG responses to generative AI are grounded in labour relations. These unions quite rightly fear the immediate future where human labour could be augmented, reclassified, and exploited by, and in the name of, algorithmic systems. Screenwriters, for example, might be hired at much lower pay rates to edit scripts first generated by ChatGPT, even if those editors would really be doing most of the creative work to turn something clichéd and predictable into something more appealing. Rather than a dystopian world where machines do all the work, the WGA and SAG protests railed against a world where workers would be paid less because executives could pretend generative AI was doing most of the work (Bender).
The Open Letter and Promotion of AI Panic
In an open letter that received enormous press and media uptake, many of the leading figures in AI called for a pause in AI development since “advanced AI could represent a profound change in the history of life on Earth”; they warned early 2023 had already seen “an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control” (Future of Life Institute).
Further, the open letter signatories called on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”, arguing that “labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts” (Future of Life Institute). Notably, many of the signatories work for the very companies involved in the “out-of-control race”. Indeed, while this letter could be read as a moment of ethical clarity for the AI industry, a more cynical reading might just be that in warning that their AIs could effectively destroy the w
- Research Article
- 10.9734/ajrcos/2024/v17i7491
- Jul 30, 2024
- Asian Journal of Research in Computer Science
With the increasing use of Generative Artificial Intelligence (AI) tools like ChatGPT and Bard, universities face challenges in maintaining academic integrity. This research investigates the impact of these tools on learning outcomes (factual knowledge, comprehension, critical thinking) in selected universities of Ghana's Upper East Region during the 2023-2024 academic year. The study specifically analyzes changes in student comprehension and academic integrity concerns when using Generative AI for content generation, research assistance, and summarizing complex topics. A mixed-methods approach was employed, combining qualitative data from interviews and open-ended questions with quantitative analysis of survey data and academic records. The research focuses on three institutions: C. K. Tedam University of Technology and Applied Sciences, Bolgatanga Technical University, and Regentropfen University College. A purposive sampling technique was used to recruit 150 participants (50 from each university) who had used Generative AI tools. Key findings show that 72% of students reported improved understanding of course material through Generative AI use, yet 75% cited academic integrity as a primary concern. Quantitative analysis revealed a weak to moderate positive correlation (r = 0.45) between AI tool usage and improved grades, with variations depending on the specific AI tasks performed. Qualitative data highlighted concerns about overreliance on AI and its impact on critical thinking skills. This research contributes to the ongoing debate on AI's role in education by providing valuable insights for educators and policymakers worldwide. The findings suggest that while AI tools can enhance comprehension, ethical considerations and potential drawbacks related to critical thinking require careful attention.
The study concludes with recommendations for integrating AI literacy programs, developing ethical guidelines, and implementing advanced plagiarism detection systems to harness the benefits of Generative AI while mitigating risks to academic integrity. Although specific to the Upper East Region of Ghana, these insights may be applicable to other educational systems with similar characteristics.
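The study above reports a weak to moderate Pearson correlation (r = 0.45) between AI tool usage and grades. As a minimal sketch of how such a coefficient is computed, the Python below implements the Pearson product-moment formula; the usage and grade figures are invented for illustration and are not the study's data:

```python
import statistics

def pearson_r(xs, ys):
    # Pearson product-moment correlation: covariance over
    # the product of the two standard deviations
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: weekly AI-tool usage (hours) vs. grade (%)
usage = [1, 2, 3, 4, 5, 6, 7, 8]
grades = [50, 55, 52, 60, 58, 63, 61, 70]
print(round(pearson_r(usage, grades), 2))  # → 0.91
```

A value like the study's 0.45 would indicate a noticeably weaker (though still positive) association than this toy example, which is the usual reading of "weak to moderate".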
- Research Article
- 10.14742/ajet.8902
- Dec 22, 2023
- Australasian Journal of Educational Technology
In this study, we introduce a framework designed to help educators assess the effectiveness of popular generative artificial intelligence (AI) tools in solving authentic assessments. We employed Bloom’s taxonomy as a guiding principle to create authentic assessments that evaluate the capabilities of generative AI tools. We applied this framework to assess the abilities of ChatGPT-4, ChatGPT-3.5, Google Bard and Microsoft Bing in solving authentic assessments in economics. We found that generative AI tools perform very well at the lower levels of Bloom's taxonomy while still maintaining a decent level of performance at the higher levels, with “create” being the weakest level of performance. Interestingly, these tools are better able to address numeric-based questions than text-based ones. Moreover, all the generative AI tools exhibit weaknesses in building arguments based on theoretical frameworks, maintaining the coherence of different arguments and providing appropriate references. Our study provides educators with a framework to assess the capabilities of generative AI tools, enabling them to make more informed decisions regarding assessments and learning activities. Our findings demand a strategic reimagining of educational goals and assessments, emphasising higher cognitive skills and calling for a concerted effort to enhance the capabilities of educators in preparing students for a rapidly transforming professional environment.
Implications for practice or policy
Our proposed framework enables educators to systematically evaluate the capabilities of widely used generative AI tools in assessments and assist them in the assessment design process. Tertiary institutions should re-evaluate and redesign programmes and course learning outcomes. The new focus on learning outcomes should address the higher levels of educational goals of Bloom’s taxonomy, specifically the “create” level.
- Conference Article
- 10.54941/ahfe1004957
- Jan 1, 2024
In the dynamic field of programming education, integrating artificial intelligence (AI) tools has started to play a significant role in enhancing learning experiences. This paper presents a case study conducted during a foundational programming course for first-year students in higher education, where students were encouraged to utilize generative artificial intelligence programming copilot extensions in their programming IDE and browser-based generative AI tools as supportive AI tools. The primary objective was to observe the impact of AI on the learning curve and the overall educational experience. Key findings suggest that the introduction of AI tools significantly altered the learning experience for students. Many who initially struggled with grasping elementary programming concepts found that AI support made understanding basic programming concepts much easier, enhancing their confidence and skills. This was particularly evident in the reduced levels of anxiety typically associated with early programming learning, as the AI copilot provided a non-judgmental, always-available source for clarifying doubts, including queries that students might hesitate to ask in a traditional classroom setting. Notably, some students leveraged the AI to generate similar exercise problems, reinforcing their understanding and skills. The AI's capability to address basic queries also freed up the instructor's time, allowing for more personalized student guidance in more advanced problems. This shift in the instructional dynamic further contributed to a learning environment where students felt more comfortable engaging with complex topics, thereby reducing the psychological barriers often linked with early-stage programming education. The course's structure, enriched by AI, enabled students to delve into more complex programming constructs earlier than traditional curricula would allow.
For instance, students were tasked with simulating basic e-commerce operations, such as user registration, product browsing, and cart functionalities. These practical challenges naturally introduced advanced concepts like external data storage, unit testing, and user interface design, which are typically reserved for more advanced courses. With the help of generative AI programming copilot tools, students at any programming skill level were able to develop nearly functional complex structures. Interestingly, even when their projects were not fully functional, students remained motivated. Instead of feeling discouraged by these imperfect outcomes, they showed resilience and a keen interest in understanding and improving their code. This reaction is a significant shift from traditional learning settings, where unfinished or flawed projects often lead to increased anxiety or a drop in motivation. Furthermore, the AI's proactive suggestions inspired students to explore beyond the curriculum. Advanced learners delved into databases, cryptography libraries in Python, and even more advanced user interface design, ensuring that they remained engaged and challenged. This elementary course, enhanced by generative AI tools, also inspired students to learn other programming languages, since they now saw that individual learning is more accessible with the aid of generative AI. In conclusion, the integration of AI in programming education offers a promising avenue for enhancing both the learning experience and outcomes. This case study underscores the potential of AI to revolutionize traditional teaching methodologies, fostering a more dynamic, responsive, and inclusive learning environment. This paper discusses the results, possibilities, and challenges of AI-empowered education in programming. It also gives practical examples as well as future research perspectives.
- Research Article
- 10.2478/ctra-2025-0007
- Jan 1, 2025
- Creativity. Theories – Research - Applications
The expansion of artificial intelligence (AI) tools has brought about new opportunities and challenges for teachers and students. These tools have the potential to reshape teaching and stimulate both students’ and teachers’ creativity. In 21st-century education, creativity emerges as a key skill that encompasses problem-solving, innovation, adaptability, critical thinking, and cognitive development. AI tools also provide personalized assistance and feedback as well as customized study materials. Moreover, they have proven beneficial in cultivating critical thinking and enhancing students’ research skills. Instead of questioning teachers’ preparedness for AI technologies, the focus should be on discovering ways to effectively and creatively integrate these tools into the classroom. This paper explores the possibilities of implementing generative AI tools to promote students’ creativity, thus enhancing the overall quality of teaching. In the Croatian educational system, similarly to Poland, school pedagogues should encourage positive changes within the school culture. Therefore, this paper also underscores the role of school pedagogues in bridging the gap between teachers and AI tools as an educational innovation. School pedagogues should be instrumental in supporting teachers during the integration of AI tools into their teaching by showcasing practical applications and emphasizing potential benefits for student engagement and learning outcomes. In this capacity, school pedagogues bear the responsibility of fostering a reflective and critical approach towards AI tools, advocating creative yet responsible use of technology in the classroom.
- Research Article
- 10.59994/pau.2025.si.93
- Sep 14, 2025
- Journal of Palestine Ahliya University for Research and Studies
This study aimed to analyze the impact of learning to use generative artificial intelligence tools on enhancing the educational experience and learning outcomes among postgraduate students. It also examined the opportunities and challenges associated with integrating these tools and explored their implications for the quality and effectiveness of higher education. The study adopted a qualitative research approach based on grounded theory principles, with the aim of providing an in-depth and systematic analysis of the data. The purposive sample consisted of 36 students enrolled in the Master’s programs in Human Resource Management and Digital Business Administration at Al-Quds Open University. The data collection instrument was designed using Google Forms and comprised five sections that included semi-structured questions (both open- and close-ended) to elicit students’ opinions and experiences. The data were analyzed using MAXQDA software, which contributed to ensuring accuracy and systematic organization of the results. The findings revealed that generative AI tools play a prominent role in enhancing the educational experience and improving learning outcomes, as these tools facilitated access to information and supported learning processes. Nevertheless, the study indicated the presence of several challenges. In light of the results, the study presented a set of recommendations, including the development of a framework for adopting generative AI tools in postgraduate programs at Al-Quds Open University and similar academic institutions, as well as establishing training programs for faculty members first, followed by students. The originality of this study lies in its contribution to bridging the research gap on the use of generative AI tools in Palestinian higher education by highlighting their impact on improving the learning experience and outcomes, while also uncovering the challenges and opportunities associated with their integration.
- Research Article
- 10.52783/jisem.v10i38s.6956
- Apr 22, 2025
- Journal of Information Systems Engineering and Management
Focusing on generative artificial intelligence (AI) tools using large language models, the present research explores the individual factors impacting low-income Koreans’ attitudes toward generative AI tools utilized to search for information (e.g., ChatGPT, Google Gemini, etc.). Specifically, we first examine whether low-income Korean individuals’ prior AI experience, perceived usefulness of AI, and general attitude toward AI influence their attitudes toward generative AI tools (RQ1). Second, we examine whether the prior AI experience affects the attitude toward generative AI tools via the perceived usefulness of AI (RQ2). Third, we examine whether the prior AI experience influences the attitude toward generative AI tools via the general attitude toward AI (RQ3). Fourth, we examine whether the relationship between prior AI experience and attitude toward generative AI tools is serially mediated by the perceived usefulness of AI and general attitude toward AI (RQ4). To answer the research questions, we conducted a hierarchical multiple regression analysis and a mediation analysis using data from low-income Koreans who were aware of generative AI tools (n = 770). The results indicate that (1) both the perceived usefulness of AI and general attitude toward AI are positively associated with the attitude toward generative AI tools; (2) the perceived usefulness of AI mediates the relationship between the prior AI experience and the attitude toward generative AI tools; (3) the indirect effect of prior AI experience on the attitude toward generative AI tools, via the general attitude toward AI, is not statistically significant; and (4) the prior AI experience influences the attitude toward generative AI tools through a sequential process of the perceived usefulness of AI and general attitude toward AI. The findings provide important implications for enhancing attitudes toward generative AI tools.
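The mediation analyses described above estimate indirect effects (e.g., prior AI experience → perceived usefulness → attitude toward generative AI tools). As a rough sketch of the underlying arithmetic, the simple indirect effect a × b can be obtained from two ordinary least squares fits; the function and toy data below are illustrative assumptions only, and real mediation analyses typically rely on dedicated tools (e.g., bootstrapped confidence intervals):

```python
def indirect_effect(x, m, y):
    """Estimate the simple mediation indirect effect a*b via OLS.

    a: slope of M ~ X; b: partial slope of M in Y ~ X + M.
    """
    n = len(x)
    cx = [v - sum(x) / n for v in x]  # centre each variable
    cm = [v - sum(m) / n for v in m]
    cy = [v - sum(y) / n for v in y]
    sxx = sum(v * v for v in cx)
    smm = sum(v * v for v in cm)
    sxm = sum(p * q for p, q in zip(cx, cm))
    sxy = sum(p * q for p, q in zip(cx, cy))
    smy = sum(p * q for p, q in zip(cm, cy))
    a = sxm / sxx                              # a-path: X -> M
    det = sxx * smm - sxm * sxm                # 2x2 normal equations for Y ~ X + M
    b = (sxx * smy - sxm * sxy) / det          # b-path: M -> Y, controlling for X
    return a * b

# Hypothetical scores: experience (x), perceived usefulness (m), attitude (y)
x = [0, 1, 2, 3]
m = [1, 2, 2, 4]
y = [2, 3, 4, 6]
print(round(indirect_effect(x, m, y), 3))  # → 0.514
```

A non-zero product a × b is only suggestive; significance of the indirect effect is normally judged with resampling, which is beyond this sketch.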
- Research Article
- 10.1108/jeet-06-2025-0040
- Oct 14, 2025
- Journal of Ethics in Entrepreneurship and Technology
Purpose
This study aims to examine the ethical impact of generative artificial intelligence (AI) tools on human relationships and community life. It explores how AI-mediated interactions can reshape essential social practices, particularly in emotionally meaningful or developmentally formative spaces. Drawing on interdisciplinary research and moral philosophy, the article introduces the REAL Framework: Retained, Eroded, Atrophied and Leveraged. This model helps evaluate the relational consequences of emerging technologies. The purpose is to provide educators, institutional leaders and technology designers with a critical and practical tool for assessing whether generative AI tools support authentic human connection or subtly undermine it.
Design/methodology/approach
This article uses a conceptual and ethical analysis methodology, drawing from recent interdisciplinary literature in AI ethics, psychology and theology. Rather than presenting empirical findings, it offers a critical examination of how generative AI tools shape human relationships and community dynamics. The article synthesizes insights from scholarly research and cultural observation to develop the REAL Framework, a practical model for ethical evaluation. This approach allows for a reflective, theory-informed perspective that emphasizes relational integrity and communal well-being in the adoption and use of AI technologies.
Findings
The article finds that generative AI tools, while offering potential benefits, can also subtly distort or displace essential elements of human relationships. Through critical analysis, it identifies specific risks such as relational erosion, skill atrophy and the simulation of emotional intimacy without moral reciprocity. The REAL Framework, which stands for Retained, Eroded, Atrophied and Leveraged, serves as a practical tool to assess these relational impacts. Findings suggest that ethical evaluation of AI must move beyond technical concerns to consider the formation of individuals and communities. The framework helps users evaluate whether AI tools support or undermine authentic connections.
Research limitations/implications
This article presents a conceptual framework rather than empirical research, which limits the generalizability of its conclusions. While grounded in interdisciplinary scholarship, its findings are interpretive and intended to guide ethical reflection rather than predict outcomes. Future studies could test the REAL Framework across various cultural and technological contexts to assess its practical utility. Despite these limitations, the article offers valuable implications for educators, developers and institutional leaders. It encourages proactive, community-centered evaluation of generative AI tools and highlights the need for ethical discernment that prioritizes relational integrity and long-term human development over short-term technological efficiency.
Practical implications
The article provides a usable framework for evaluating the relational impact of generative AI tools within educational, organizational and community settings. The REAL Framework equips practitioners to ask targeted questions about whether a tool preserves essential human connection, erodes relational depth, weakens emotional skills or can be used to support authentic community. This model is especially relevant for educators, institutional leaders and developers who are navigating the integration of AI into emotionally significant environments. By applying the framework, stakeholders can make more informed, ethically responsible decisions that prioritize the dignity of persons and the health of human relationships.
Social implications
This article highlights the broader social implications of generative AI tools that increasingly shape human interaction, identity and community life. As AI systems mediate emotionally significant exchanges, there is a risk that relational authenticity may be replaced by simulation and convenience. The REAL Framework encourages reflection on how technology influences not only individual behavior but also collective values and social norms. Its application can help communities safeguard relational integrity, resist depersonalization and foster practices that strengthen human connection. The framework invites ongoing communal discernment about the kind of society being formed through the tools we choose to adopt.
Originality/value
This article offers an original contribution by introducing the REAL Framework as a practical tool for evaluating the relational and ethical impact of generative AI technologies. Unlike purely technical or utilitarian approaches, this model emphasizes the social and moral dimensions of AI use, particularly in emotionally formative and community-based contexts. The framework draws from interdisciplinary research and applies it to a timely cultural concern, offering a structured means of reflection for educators, leaders and designers. Its value lies in equipping stakeholders to move beyond efficiency-based assessments and instead prioritize the preservation of authentic human connection and communal well-being.
- Conference Article
- 10.5753/ihc.2025.10925
- Sep 8, 2025
Introduction: Students have widely used Generative Artificial Intelligence (AI) tools to assist them in their daily classroom activities and assignments (with or without the consent of their teachers). These tools are beneficial and, when used critically, can help students complete their tasks and better understand the various associated aspects. Objective: In this paper, we present an experience using AI tools to support the collection, analysis, and organization of user data in projects developed during a User Experience course in undergraduate computing programs. Methodology: The study involved two teachers and 99 students across three classes of the course. AI tools were integrated into project activities, and feedback was gathered from approximately half of the participants to assess their initial impressions. Results: The preliminary findings highlight the potential of generative AI tools to enhance student performance and learning in User Experience classes.
- Research Article
- 10.6087/kcse.352
- Feb 5, 2025
- Science Editing
Purpose: This analysis aims to propose guidelines for artificial intelligence (AI) research ethics in scientific publications, intending to inform publishers and academic institutional policies in order to guide them toward a coherent and consistent approach to AI research ethics. Methods: A literature-based thematic analysis was conducted. The study reviewed the publication policies of the top 10 journal publishers addressing the use of AI in scholarly publications as of October 2024. Thematic analysis using Atlas.ti identified themes and subthemes across the documents, which were consolidated into proposed research ethics guidelines for using generative AI and AI-assisted tools in scholarly publications. Results: The analysis revealed inconsistencies among publishers’ policies on AI use in research and publications. AI-assisted tools for grammar and formatting are generally accepted, but positions vary regarding generative AI tools used in pre-writing and research methods. Key themes identified include author accountability, human oversight, recognized and unrecognized uses of AI tools, and the necessity for transparency in disclosing AI usage. All publishers agree that AI tools cannot be listed as authors. Concerns involve biases, quality and reliability issues, compliance with intellectual property rights, and limitations of AI detection tools. Conclusion: The article highlights the significant knowledge gap and inconsistencies in guidelines for AI use in scientific research. There is an urgent need for unified ethical standards, and guidelines are proposed for distinguishing between the accepted use of AI-assisted tools and the cautious use of generative AI tools.
- Research Article
- 10.20853/39-3-6272
- Jan 1, 2025
- South African Journal of Higher Education
Generative artificial intelligence (AI) tools have sparked debates in the education sector, prompting researchers to explore their desirability and potential in education. This paper acknowledges generative AI's potential to support the delivery of teaching, learning, and research in higher education, emphasising its ability to improve student writing quality as well as academic productivity, success rates, and independence. However, the responsible use of these AI tools to support research is also crucial. Furthermore, the challenges associated with AI tool use, especially accessibility and usage in the African context, are recognised. These include ethical challenges relating to the (mis)use of AI where policy regulations are absent or inadequate, as well as technical and structural challenges relating to connectivity, power outages, device access, and technical know-how. Therefore, this paper aims to identify the opportunities and challenges associated with using AI tools to support research in African higher education classrooms. A qualitative systematic literature review was conducted, with thematic analysis applied to two articles drawn from a final selection of 29. Findings indicated that generative AI tools could enhance student writing skills and increase productivity, and could lead to research autonomy, improved writing proficiency and quality, and greater academic throughput. Shortcomings included AI misuse, knowledge deficiencies, and infrastructural challenges preventing AI access, alongside inadequate regulations on the use of generative AI tools for learning and teaching. It is essential to address the ethical concerns, invest in skills development, and promote equitable digital access, especially in Africa, where this is limited. In addition, the capability approach revealed how the digital divide limits the adoption of generative AI tools in Africa.
- Research Article
- 10.62411/jcta.9447
- Feb 26, 2024
- Journal of Computing Theories and Applications
Generative artificial intelligence tools have recently attracted a great deal of attention because of their considerable advantages, including ease of use, quick generation of answers to requests, and their human-like intelligence. This paper presents a vivid comparative analysis of the top nine generative artificial intelligence (AI) tools, namely ChatGPT, Perplexity AI, YouChat, ChatSonic, Google's Bard, Microsoft Bing Assistant, HuggingChat, Jasper AI, and Quora's Poe, paying attention to the pros and cons each tool presents. This comparative analysis shows that the generative AI tools have several pros that outweigh the cons. Further, we explore the transformative impact of generative AI in Natural Language Processing (NLP), focusing on its integration with search engines, privacy concerns, and ethical implications. A comparative analysis categorizes generative AI tools based on popularity and evaluates challenges in development, including data limitations and computational costs. The study highlights ethical considerations such as technology misuse and regulatory challenges. Additionally, we delve into AI planning techniques in NLP, covering classical planning, probabilistic planning, hierarchical planning, temporal planning, knowledge-driven planning, and neural planning models; these planning approaches are vital in achieving specific goals in NLP tasks. In conclusion, we provide a concise overview of the current state of generative AI, including its challenges, ethical considerations, and potential applications, contributing to the academic discourse on human-computer interaction.
- Research Article
- 10.62049/jkncu.v5i1.177
- Dec 29, 2024
- Journal of the Kenya National Commission for UNESCO
The purpose of this study was to evaluate the effectiveness of Artificial Intelligence (AI) tools in teaching and learning in higher education institutions in Kenya, specifically focusing on Intelligent Tutoring Systems (ITS), Adaptive Learning Platforms, Virtual Learning Assistants (VLAs), Automated Grading Systems, and Learning Analytics Systems (LAS), and on their accessibility, use, and effectiveness in teaching and learning. The study employed a mixed-methods research design, combining quantitative and qualitative approaches, to gather comprehensive data from faculty members, students, and administrators across 15 selected public and private universities and technical colleges in Kenya. The findings indicated that the accessibility of AI tools in institutions of higher learning in Kenya is significantly limited. A large majority of respondents reported that AI tools are not readily available, highlighting disparities in access across different departments and projects within institutions. In terms of usage, the integration of AI tools into teaching and learning practices is still in its early stages in most institutions, and where tools are available they are not always well integrated with existing curricula, leading to limited and uneven adoption across disciplines. Despite these challenges, those who have begun using AI tools have reported benefits such as personalized learning, more efficient assessment processes, and enhanced feedback mechanisms, indicating that AI has the potential to transform educational practices if more effectively utilized. The findings further established a significant correlation between AI tools and effective teaching and learning in institutions of higher learning in Kenya (r = .781; p < .001). The study noted that while AI can significantly improve the educational experience, its current impact is constrained by several factors.
Faculty members' unfamiliarity with AI, the lack of comprehensive training, and the inadequate integration of AI tools into the curriculum are major barriers to their effective use. However, where AI has been successfully implemented, it has contributed to better learning outcomes, higher student engagement, and more personalized feedback. The study recommended that institutions invest in infrastructure, ongoing professional development, and curriculum integration, ensuring that AI tools are both accessible and effectively used to enhance teaching and learning outcomes.
- Research Article
- 10.47408/jldhe.vi32.1464
- Oct 31, 2024
- Journal of Learning Development in Higher Education
Using generative Artificial Intelligence (GenAI) tools has recently been deemed acceptable in some university policies, but how does this affect students' writing processes? How can we ensure that using GenAI in the writing process does not detract from learning outcomes? In our conference session, we reported on a collaborative project between the Academic Communication Centre at University College London (UCL) and three students (studying BSc Bioscience, BSc Linguistics, and MSc International Planning), which explored what was gained and what was lost when incorporating GenAI-driven tools into the reading-into-writing process. We asked the students to complete a written assignment from their course using GenAI tools. The project consisted of three stages: 1) a pre-task reflection on writing processes and learning outcomes; 2) completion of an assignment using GenAI tools, with ongoing diary entries; and 3) interviews exploring the students' feelings towards GenAI tools and the gains and losses they experienced during the writing process.
- Research Article
- 10.24203/jcv4gx50
- Apr 4, 2025
- Asian Journal of Humanities and Social Studies
The integration of artificial intelligence (AI) in classrooms has transformed pedagogical and learning practices alike, allowing for greater student academic achievement. However, its ethical implications remain a critical concern, with educational institutions attempting to address (and often circumvent) the use of generative AI in assessment. By analysing existing literature and conducting semi-structured interviews and focus group discussions with freshmen at a private university in Pakistan, this research qualitatively examines how generative AI tools shape student perceptions and positionality in debates about educational equity, learning outcomes, and ethical engagement with AI. To triangulate the findings, first-year students were divided into experimental and control groups, with the former receiving monthly AI training. Firstly, the findings showed that the use of generative AI, particularly ChatGPT, and the discussions it prompts position AI as both an obstacle and a path to educational equity. Additionally, AI use in universities is negotiated on the basis of one's positionality, with varying concerns for learning outcomes. Trust in AI differed between the groups, with many participants citing concerns such as algorithmic bias and an indelible digital footprint. Moreover, generative AI is positioned differently in ethical understandings, as both a personal and an institutional ethical problem. This study contributes a discussion of appropriate practices for AI use in education, emphasising the need for clearer guidelines, and highlights the fast pace of generative AI's progress and how debates and perceptions about these tools remain in their nascent phase.