Assessing the Awareness, Utilization, Perceived Benefits, and Challenges of Generative Artificial Intelligence Tools in Academic Writing among Graduate Students

Abstract

The rapid evolution of Generative Artificial Intelligence (AI) is reshaping academic writing practices. This study investigates its impact on graduate students at a Philippine Catholic university, assessing the awareness, utilization, and perceptions of Generative AI tools among 150 thesis and dissertation writers. The findings reveal a high degree of awareness and widespread utilization of these tools, with ChatGPT particularly favored for tasks such as proofreading, brainstorming, and research. Perceived benefits include enhanced efficiency and accessibility, streamlining various aspects of the writing process. The study also finds that social media and peer networks are the primary sources of information about these tools. Challenges persist, however: notable concerns include the potential erosion of critical thinking skills, the opacity of AI algorithms, and broader ethical issues related to plagiarism and bias in AI-generated content. The study further establishes a positive correlation between awareness and utilization of these tools, underscoring the need for targeted educational interventions that promote responsible AI use. This research offers insights into the evolving role of AI in education, particularly within the Philippine context, and supports a balanced strategy for integrating AI into academia: one that empowers students to take full advantage of AI tools while critically assessing their outputs and adhering strictly to academic integrity standards. The results also point to the need for clear institutional policies, comprehensive training initiatives, and open discourse, so that AI functions as a mechanism for enhancing academic exploration rather than a substitute for fundamental human skills and ethical judgment.
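
The reported awareness-utilization link is a simple bivariate correlation. As a rough, hypothetical sketch (not the authors' analysis), a Pearson correlation over per-respondent composite scores could be computed as follows; the column names and values are invented:

```python
# Hypothetical sketch: correlating awareness and utilization scores.
# The column names and data are illustrative, not the study's dataset.
import pandas as pd
from scipy.stats import pearsonr

# Suppose each row holds one respondent's Likert-scale composite scores.
df = pd.DataFrame({
    "awareness":   [4.2, 3.8, 4.5, 2.9, 3.6, 4.8, 3.1, 4.0],
    "utilization": [4.0, 3.5, 4.6, 2.5, 3.9, 4.7, 2.8, 4.1],
})

r, p = pearsonr(df["awareness"], df["utilization"])
print(f"Pearson r = {r:.2f}, p = {p:.4f}")  # a positive, significant r mirrors the reported finding
```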

Similar Papers
  • Research Article
  • 10.56357/jt.v21i1.429
The Effectiveness of Artificial Intelligence and Deep Learning Tools in Enhancing Academic Journal Writing: A Mixed Methods Study of Arabic Language Education Students in Indonesia
  • Jun 25, 2025
  • TRANSFORMASI
  • Zulaikha Zulaikha + 3 more

This study investigates the effectiveness of Artificial Intelligence (AI) and Deep Learning (DL) tools in enhancing academic journal writing skills among students in the Arabic Language Education program at UIN Maulana Malik Ibrahim Malang. Utilizing a mixed-methods approach within a quasi-experimental design, the research involved 90 final-year students divided into experimental and control groups. The intervention group employed AI-based tools such as ChatGPT, Grammarly, and QuillBot throughout the writing process, while the control group relied on conventional methods. Data were collected through pre-test and post-test assessments, reflective journals, and structured questionnaires. Quantitative results showed a statistically significant improvement in writing performance among students who used AI tools, with large effect sizes (Cohen’s d > 1.0). Qualitative findings revealed that students engaged critically with AI outputs, valued teacher feedback, and developed ethical awareness regarding authorship and originality. The integration of AI tools also increased student confidence, enhanced writing fluency, and promoted autonomous learning. However, limitations in semantic precision and rhetorical fit, especially in theology-specific content, necessitated human revision. These findings affirm the role of AI as a cognitive scaffold in academic writing and highlight the need for culturally responsive, ethically guided AI integration in language teacher education.
Keywords: Artificial Intelligence, Academic Writing, Deep Learning, Arabic Education, Cognitive Scaffold
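
For readers unfamiliar with the effect size reported above, Cohen's d expresses the difference between two group means in pooled-standard-deviation units, with d > 1.0 conventionally read as a large effect. A minimal sketch, using invented post-test scores rather than the study's data:

```python
# Minimal Cohen's d sketch for independent experimental vs. control
# groups; the scores below are invented for illustration.
import numpy as np

def cohens_d(group_a: np.ndarray, group_b: np.ndarray) -> float:
    """Cohen's d using a pooled sample standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    pooled_var = ((n_a - 1) * group_a.var(ddof=1)
                  + (n_b - 1) * group_b.var(ddof=1)) / (n_a + n_b - 2)
    return (group_a.mean() - group_b.mean()) / np.sqrt(pooled_var)

experimental = np.array([82.0, 88, 79, 91, 85, 87, 90, 84])  # AI-assisted group
control      = np.array([70.0, 75, 68, 72, 74, 69, 73, 71])  # conventional methods

print(f"Cohen's d = {cohens_d(experimental, control):.2f}")  # d > 1.0 is a large effect
```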

  • Research Article
  • 10.63720/jqz1pdhl
Artificial Intelligence in Academic Writing: Opportunities and Risks from Planning to Publication
  • Apr 26, 2025
  • Journal of the Best Available Evidence in Medicine
  • Aaron Williams + 2 more

With the recent prevalence and enhanced accessibility of Artificial Intelligence (AI) tools, the aim of this review is to assess both the benefits and risks of using AI tools in academic writing, allowing authors to make more informed decisions about AI usage based on the current literature and the best available evidence. Between February and April 2024, the authors conducted a narrative review of the academic literature using AI-focused keyword searches of databases including PubMed. Risks and benefits of using AI in academic writing were identified and subcategorised into four stages: planning, execution of research, drafting of a manuscript, and publication. The literature suggests that AI tools, particularly large language models, provide several potential benefits at each stage of academic writing, including assistance in idea generation, data analysis, peer review, and drafting, with the potential to significantly improve overall efficiency. Significant challenges were also identified, including bias, plagiarism risk, and misleading AI-generated content (often referred to as hallucinations). In conclusion, AI tools appear to present promising opportunities for improving academic writing and could potentially revolutionise the process by which academic research is conducted. Careful consideration of their limitations, along with the legal and ethical implications, is paramount; the authors therefore recommend a collaborative effort led by the academic community to establish best practice guidelines and regulatory frameworks for the responsible and effective implementation of AI tools in the process of scientific publication.

  • Research Article
  • Citations: 2
  • 10.53797/aspen.v4i2.6.2024
Artificial Intelligence in Academic Writing: A Literature Review
  • Nov 25, 2024
  • Asian Pendidikan
  • Hui Guo + 1 more

Artificial intelligence (AI) has emerged as a transformative technology in education. This review focused on the intersection of AI tools and academic writing, addressing challenges such as plagiarism, language barriers, and feedback processes. The problem statement revolved around the increasing integration of AI in academic contexts, which offered opportunities for improved student learning but raised concerns over ethical issues such as plagiarism and over-dependence on AI-generated content. The purpose of this review was to critically review highly cited studies on the use of AI in academic writing, identifying AI tools and key findings. Research questions guiding this review included: 1) Which highly cited studies related to AI and academic writing, published since 2020, were identified as relevant? 2) Which AI tools had been utilised for academic writing? and 3) What findings had been reported in these previous studies? Methodologically, the review employed keyword searches in Google and Scopus databases to identify highly cited, open-access articles published since 2020. This resulted in the selection of 11 studies that spanned various AI tools in academic writing. Findings indicated that ChatGPT was the most frequently used AI tool, employed for tasks such as academic text generation, plagiarism detection, and language learning support. The review also highlighted ethical concerns, particularly regarding plagiarism, content accuracy, and the risk of over-reliance on AI. The implications were both theoretical and practical. Theoretically, this review demonstrated AI’s expanding influence in educational theory, especially in scaffolding learning for non-native English speakers. Practically, AI tools offered personalised feedback and enhanced writing outcomes, though educators must implement these tools responsibly to prevent over-reliance. In conclusion, while AI tools showed great promise in improving academic writing, future research should address ethical concerns, enhance the accuracy of AI-generated content, and develop frameworks that balance AI assistance with the promotion of critical thinking skills.

  • Research Article
  • Citations: 37
  • 10.5204/mcj.3004
ChatGPT Isn't Magic
  • Oct 2, 2023
  • M/C Journal
  • Tama Leaver + 1 more

Introduction

Author Arthur C. Clarke famously argued that in science fiction literature “any sufficiently advanced technology is indistinguishable from magic” (Clarke). On 30 November 2022, technology company OpenAI publicly released their Large Language Model (LLM)-based chatbot ChatGPT (Chat Generative Pre-Trained Transformer), and instantly it was hailed as world-changing. Initial media stories about ChatGPT highlighted the speed with which it generated new material as evidence that this tool might be both genuinely creative and actually intelligent, in both exciting and disturbing ways. Indeed, ChatGPT is part of a larger pool of Generative Artificial Intelligence (AI) tools that can very quickly generate seemingly novel outputs in a variety of media formats based on text prompts written by users. Yet, claims that AI has become sentient, or has even reached a recognisable level of general intelligence, remain in the realm of science fiction, for now at least (Leaver). That has not stopped technology companies, scientists, and others from suggesting that super-smart AI is just around the corner. Exemplifying this, the same people creating generative AI are also vocal signatories of public letters that ostensibly call for a temporary halt in AI development, but these letters are simultaneously feeding the myth that these tools are so powerful that they are the early form of imminent super-intelligent machines. For many people, the combination of AI technologies and media hype means generative AIs are basically magical insomuch as their workings seem impenetrable, and their existence could ostensibly change the world. This article explores how the hype around ChatGPT and generative AI was deployed across the first six months of 2023, and how these technologies were positioned as either utopian or dystopian, always seemingly magical, but never banal. We look at some initial responses to generative AI, ranging from schools in Australia to picket lines in Hollywood. We offer a critique of the utopian/dystopian binary positioning of generative AI, aligning with critics who rightly argue that focussing on these extremes displaces the more grounded and immediate challenges generative AI brings that need urgent answers. Finally, we loop back to the role of schools and educators in repositioning generative AI as something to be tested, examined, scrutinised, and played with, both to ground understandings of generative AI and to prepare today’s students for a future where these tools will be part of their work and cultural landscapes.

Hype, Schools, and Hollywood

In December 2022, one month after OpenAI launched ChatGPT, Elon Musk tweeted: “ChatGPT is scary good. We are not far from dangerously strong AI”. Musk’s post was retweeted 9400 times, liked 73 thousand times, and presumably seen by most of his 150 million Twitter followers. This type of engagement typified the early hype and language that surrounded the launch of ChatGPT, with reports that “crypto” had been replaced by generative AI as the “hot tech topic” and hopes that it would be “‘transformative’ for business” (Browne). By March 2023, global economic analysts at Goldman Sachs had released a report on the potentially transformative effects of generative AI, saying that it marked the “brink of a rapid acceleration in task automation that will drive labor cost savings and raise productivity” (Hatzius et al.).
Further, they concluded that “its ability to generate content that is indistinguishable from human-created output and to break down communication barriers between humans and machines reflects a major advancement with potentially large macroeconomic effects” (Hatzius et al.). Speculation about the potentially transformative power and reach of generative AI technology was reinforced by warnings that it could also lead to “significant disruption” of the labour market, and the potential automation of up to 300 million jobs, with associated job losses for humans (Hatzius et al.). In addition, there was widespread buzz that ChatGPT’s “rationalization process may evidence human-like cognition” (Browne), claims that were supported by the emergent language of ChatGPT. The technology was explained as being “trained” on a “corpus” of datasets, using a “neural network” capable of producing “natural language” (Dsouza), positioning the technology as human-like, and more than ‘artificial’ intelligence. Incorrect responses or errors produced by the tech were termed “hallucinations”, akin to magical thinking, which OpenAI founder Sam Altman insisted wasn’t a word that he associated with sentience (Intelligencer staff). Indeed, Altman asserts that he rejects moves to “anthropomorphize” (Intelligencer staff) the technology; however, arguably the language, hype, and Altman’s well-publicised misgivings about ChatGPT have had the combined effect of shaping our understanding of this generative AI as alive, vast, fast-moving, and potentially lethal to humanity. Unsurprisingly, the hype around the transformative effects of ChatGPT and its ability to generate ‘human-like’ answers and sophisticated essay-style responses was matched by a concomitant panic throughout educational institutions. The beginning of the 2023 Australian school year was marked by schools and state education ministers meeting to discuss the emerging problem of ChatGPT in the education system (Hiatt). Every state in Australia, bar South Australia, banned the use of the technology in public schools, with a “national expert task force” formed to “guide” schools on how to navigate ChatGPT in the classroom (Hiatt). Globally, schools banned the technology amid fears that students could use it to generate convincing essay responses whose plagiarism would be undetectable with current software (Clarence-Smith). Some schools banned the technology citing concerns that it would have a “negative impact on student learning”, while others cited its “lack of reliable safeguards preventing these tools exposing students to potentially explicit and harmful content” (Cassidy). ChatGPT investor Musk famously tweeted, “It’s a new world. Goodbye homework!”, further fuelling the growing alarm about the freely available technology that could “churn out convincing essays which can't be detected by their existing anti-plagiarism software” (Clarence-Smith). Universities were reported to be moving towards more “in-person supervision and increased paper assessments” (SBS), rather than essay-style assessments, in a bid to out-manoeuvre ChatGPT’s plagiarism potential. Seven months on, concerns about the technology seem to have been dialled back, with educators more curious about the ways the technology can be integrated into the classroom to good effect (Liu et al.); however, the full implications and impacts of generative AI are still emerging.
In May 2023, the Writer’s Guild of America (WGA), the union representing screenwriters across the US creative industries, went on strike, and one of their core issues was “regulations on the use of artificial intelligence in writing” (Porter). Early in the negotiations, Chris Keyser, co-chair of the WGA’s negotiating committee, lamented that “no one knows exactly what AI’s going to be, but the fact that the companies won’t talk about it is the best indication we’ve had that we have a reason to fear it” (Grobar). At the same time, the Screen Actors’ Guild (SAG) warned that members were being asked to agree to contracts that stipulated that an actor’s voice could be re-used in future scenarios without that actor’s additional consent, potentially reducing actors to a dataset to be animated by generative AI technologies (Scheiber and Koblin). In a statement issued by SAG, they made their position clear that the creation or (re)animation of any digital likeness of any part of an actor must be recognised as labour and properly paid, also warning that any attempt to legislate around these rights should be strongly resisted (Screen Actors Guild). Unlike the more sensationalised hype, the WGA and SAG responses to generative AI are grounded in labour relations. These unions quite rightly fear the immediate future where human labour could be augmented, reclassified, and exploited by, and in the name of, algorithmic systems. Screenwriters, for example, might be hired at much lower pay rates to edit scripts first generated by ChatGPT, even if those editors would really be doing most of the creative work to turn something clichéd and predictable into something more appealing. Rather than a dystopian world where machines do all the work, the WGA and SAG protests railed against a world where workers would be paid less because executives could pretend generative AI was doing most of the work (Bender).

The Open Letter and Promotion of AI Panic

In an open letter that received enormous press and media uptake, many of the leading figures in AI called for a pause in AI development since “advanced AI could represent a profound change in the history of life on Earth”; they warned early 2023 had already seen “an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control” (Future of Life Institute). Further, the open letter signatories called on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”, arguing that “labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts” (Future of Life Institute). Notably, many of the signatories work for the very companies involved in the “out-of-control race”. Indeed, while this letter could be read as a moment of ethical clarity for the AI industry, a more cynical reading might just be that in warning that their AIs could effectively destroy the world…

  • Research Article
  • 10.47772/ijriss.2024.804216
Artificial Intelligence (AI) Usage and Its Influence to the Students’ Academic Writing: A Quantitative – Correlation Investigation
  • Jan 1, 2024
  • International Journal of Research and Innovation in Social Science
  • Sherly Mae L Dingal + 10 more

This study dealt with artificial intelligence (AI) usage and its influence on students’ academic writing. The primary goals of the study were to determine the level of artificial intelligence (AI) usage and of students’ academic writing in terms of their respective indicators, the significant relationship between artificial intelligence (AI) usage and students’ academic writing, and which domains of artificial intelligence (AI) usage substantially influenced students’ academic writing. The study utilized a quantitative-correlational design with 335 participants among Junior High School and Senior High School students at Lorenzo S. Sarmiento Sr. National High School. The average weighted mean, Pearson r, and multiple regression analysis were the statistical tools used in this study. The results showed a high level of artificial intelligence (AI) usage in terms of satisfaction, AI literacy, relevance of AI, and confidence. Likewise, results showed a high level of students’ academic writing in terms of usefulness, ease of use, and attitude towards usage. In addition, there was a high correlation and a significant relationship between artificial intelligence (AI) usage and students’ academic writing, leading to the rejection of the null hypothesis. The domains of artificial intelligence (AI) usage that influenced students’ academic writing were the relevance of AI, confidence, and AI literacy, while satisfaction had no significant influence. Thus, educational institutions could harness the positive influence of artificial intelligence (AI) on students’ academic writing, creating a more technologically advanced, collaborative, and effective learning environment while encouraging students to participate in AI-assisted activities that contribute to better progress and improved self-regulated learning as they interact with AI tools and their peers in an academic setting.
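
As a hedged illustration of the multiple-regression step described above, the sketch below regresses an academic-writing score on the four AI-usage domains using statsmodels; all variable names and values are hypothetical, not the study's data:

```python
# Hypothetical sketch of the reported multiple regression: four AI-usage
# domains predicting an academic-writing score. Data are invented.
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "satisfaction": [4.1, 3.5, 4.4, 3.0, 3.8, 4.6, 3.2, 4.0, 3.7, 4.3],
    "ai_literacy":  [3.9, 3.2, 4.5, 2.8, 3.6, 4.7, 3.0, 4.1, 3.5, 4.2],
    "relevance":    [4.3, 3.4, 4.6, 3.1, 3.9, 4.8, 3.3, 4.2, 3.6, 4.4],
    "confidence":   [4.0, 3.3, 4.4, 2.9, 3.7, 4.5, 3.1, 4.0, 3.4, 4.3],
    "writing":      [4.2, 3.4, 4.5, 3.0, 3.8, 4.7, 3.2, 4.1, 3.5, 4.4],
})

X = sm.add_constant(df[["satisfaction", "ai_literacy", "relevance", "confidence"]])
model = sm.OLS(df["writing"], X).fit()
print(model.summary())  # per-domain coefficients and p-values show which domains matter
```

With real survey data, a non-significant coefficient for satisfaction would correspond to the null result reported for that domain.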

  • Research Article
  • 10.1155/nuf/7447348
Perceptions of Nursing Faculty on Utilizing AI Tools in Academic Writing and Publication Productivity: A Cross‐Sectional Study
  • Jan 1, 2025
  • Nursing Forum
  • Maitha Al Salti + 6 more

Background: Artificial intelligence (AI) tools have emerged as transformative assets in academic writing, offering functionalities that enhance productivity and quality. However, their adoption in nursing education and research remains underexplored. This study aimed to evaluate the utilization patterns, perceptions, and predictors of AI tool adoption among nursing faculty for academic writing and publication tasks in Oman. Methods: A cross-sectional quantitative study was conducted among 134 nursing academics from three major institutions in Oman. Participants completed the AI in Academic Writing and Publication Questionnaire (AIWQ-40), measuring perceptions of AI tools across various writing dimensions. Descriptive and inferential statistical methods, including multiple regression analyses, were used to identify predictors of AI adoption and academic productivity. Results: Regression analysis showed significant predictors of publication productivity, explaining 26.2% of the variance (R2 = 0.262, p < 0.001). Older age and higher AIWQ-40 scores positively predicted productivity, while holding a Master’s degree was negatively associated. AIWQ-40 scores were significantly influenced by frequent AI use and publication count. Conclusion: AI tools hold significant potential to enhance academic writing and research productivity in nursing education. However, addressing ethical concerns and providing targeted training are essential to maximize their impact. There is a need for institutional guidelines to support responsible and equitable AI use in academic settings.

  • Research Article
  • Citations: 4
  • 10.3389/feduc.2025.1596462
Balancing AI-assisted learning and traditional assessment: the FACT assessment in environmental data science education
  • Jun 13, 2025
  • Frontiers in Education
  • Ahmed S Elshall + 1 more

As artificial intelligence (AI) tools evolve, a growing challenge faced by educators is how to leverage invaluable AI-assisted learning while maintaining rigorous assessment. AI tools, such as ChatGPT and the Jupyter AI coding assistant, enable students to tackle advanced tasks and real-world applications. However, they also risk overreliance, which can diminish cognitive and skill development and complicate assessment design. To address these challenges, the Fundamental, Applied, Conceptual, critical Thinking (FACT) assessment was implemented in an Environmental Data Science course for upper-level undergraduate and graduate students from civil and environmental engineering and Earth sciences. By balancing traditional and AI-based assessments, the FACT assessment includes: (1) fundamental skills assessment (F) through assignments without AI assistance to build a strong coding foundation, (2) applied project assessment (A) through AI-assisted assignments and term projects to engage students in authentic tasks, (3) conceptual-understanding assessment (C) through a traditional paper-based exam to independently evaluate comprehension, and (4) critical-thinking assessment (T) through a complex multi-step case study using AI to assess critical problem-solving skills. Analysis of student performance shows that AI tools combined with instructor guidance improved performance and allowed students to tackle complex tasks and real-world applications, compared with AI tools alone without guidance. Survey results show that many students found AI tools beneficial for problem solving, yet some students expressed concerns about overreliance. By integrating assessments with and without AI tools, the FACT assessment promotes AI-assisted learning while maintaining rigorous academic assessment to prepare students for future careers in the AI era.

  • Research Article
  • Citations: 256
  • 10.1016/s2589-7500(21)00132-1
Patient and general public attitudes towards clinical artificial intelligence: a mixed methods systematic review
  • Aug 23, 2021
  • The Lancet Digital Health
  • Albert T Young + 3 more

Artificial intelligence (AI) promises to change health care, with some studies showing proof of concept of provider-level performance in various medical specialties. However, there are many barriers to implementing AI, including patient acceptance and understanding of AI. Patients' attitudes toward AI are not well understood. We systematically reviewed the literature on patient and general public attitudes toward clinical AI (either hypothetical or realised), including quantitative, qualitative, and mixed methods original research articles. We searched biomedical and computational databases from Jan 1, 2000, to Sept 28, 2020, and screened 2590 articles, 23 of which met our inclusion criteria. Studies were heterogeneous regarding the study population, study design, and the field and type of AI under study. Six (26%) studies assessed currently available or soon-to-be available AI tools, whereas 17 (74%) assessed hypothetical or broadly defined AI. The quality of the methods of these studies was mixed, with a frequent issue of selection bias. Overall, patients and the general public conveyed positive attitudes toward AI but had many reservations and preferred human supervision. We summarise our findings in six themes: AI concept, AI acceptability, AI relationship with humans, AI development and implementation, AI strengths and benefits, and AI weaknesses and risks. We suggest guidance for future studies, with the goal of supporting the safe, equitable, and patient-centred implementation of clinical AI.

  • Research Article
  • Citations: 6
  • 10.1108/lhtn-08-2024-0131
Artificial intelligence (AI) tools for academic research
  • Sep 17, 2024
  • Library Hi Tech News
  • Adetoun A Oyelude

Purpose: The purpose of the paper is to explore the rapidly evolving landscape of artificial intelligence (AI) tools in academic research, highlighting their potential to transform various stages of the research process. AI tools are transforming academic research, offering numerous benefits and challenges. Design/methodology/approach: Academic research is undergoing a significant transformation with the emergence of AI tools. These tools have the potential to revolutionize various aspects of research, from literature review to writing and proofreading. An overview of AI applications in literature review, data analysis, writing, and proofreading is given, discussing their benefits and limitations. A comprehensive review of existing literature on AI applications in academic research was conducted, focusing on tools and platforms used in various stages of the research process. AI was itself used in some of the searches for AI applications in use. Findings: The analysis reveals that AI tools can enhance research efficiency, accuracy, and quality, but also raise important ethical and methodological considerations. AI tools have the potential to significantly enhance academic research, but their adoption requires careful consideration of methodological and ethical implications. The integration of AI tools also raises questions about authorship, accountability, and the role of human researchers. The author concludes by outlining future directions for AI integration in academic research and emphasizing the need for responsible adoption. Originality/value: As AI continues to evolve, it is essential for researchers, institutions, and policymakers to address the ethical and methodological implications of AI adoption, ensuring responsible integration and harnessing the full potential of AI tools to advance academic research. This is the paper's contribution to knowledge.

  • Research Article
  • Citations: 71
  • 10.1111/nin.12556
Will ChatGPT undermine ethical values in nursing education, research, and practice?
  • Apr 26, 2023
  • Nursing Inquiry
  • Abdul‐Fatawu Abdulai + 1 more

  • Research Article
  • Citations: 1
  • 10.1155/hbe2/9943540
Determinants of Postgraduate Students’ Use of Artificial Intelligence (AI) in Academic Writing in Ghana: A Structural Equation Modelling Analysis
  • Jan 1, 2025
  • Human Behavior and Emerging Technologies
  • Gifty Edna Anani + 2 more

Academic writing has always been an arduous task, especially for postgraduate students at most African universities. Nonetheless, the emergence of artificial intelligence (AI) tools appears to have relieved postgraduate students of such supposed academic stress. Despite the concerns about the potential threat of AI to academic integrity, reports have indicated that postgraduate students are developing an increasing appreciation for the use of AI-powered tools in writing. This study, therefore, sought to uncover the potential determinants of postgraduate students’ use of AI tools in academic writing. A total of 339 postgraduate students from a Ghanaian higher educational institution participated in the study. Ajzen’s theory of planned behaviour was employed as a framework to investigate the determinants of AI use. The proposed hypotheses were all confirmed: behavioural beliefs, control beliefs, and normative beliefs were significant predictors of postgraduate students’ behavioural intention to use AI in academic writing. It was also revealed that postgraduate students’ behavioural intentions and their control beliefs had a significant direct effect on their actual use of AI in academic writing. The study contributes to global debates on AI in higher education by highlighting that postgraduate students’ readiness to adopt AI tools is shaped not only by individual attitudes but also by perceived academic norms and contextual constraints. These insights emphasise the need for policies and pedagogical frameworks that promote responsible, equitable, and context-sensitive AI integration in postgraduate education.
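
For illustration, a theory-of-planned-behaviour path model of the shape described can be estimated with an SEM library such as semopy. This is a sketch under stated assumptions: the construct names, simulated data, and coefficients are ours, not the study's specification or results:

```python
# Sketch of a TPB-style path model in semopy. Constructs, simulated
# data, and coefficients are assumed for illustration only.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(42)
n = 339  # mirrors the reported sample size
bb = rng.normal(size=n)  # behavioural beliefs
nb = rng.normal(size=n)  # normative beliefs
cb = rng.normal(size=n)  # control beliefs
intention = 0.4 * bb + 0.3 * nb + 0.2 * cb + rng.normal(scale=0.5, size=n)
actual_use = 0.5 * intention + 0.3 * cb + rng.normal(scale=0.5, size=n)

df = pd.DataFrame({
    "behavioural_beliefs": bb, "normative_beliefs": nb,
    "control_beliefs": cb, "intention": intention, "actual_use": actual_use,
})

# lavaan-style syntax: beliefs -> intention; intention and control -> use
tpb = """
intention ~ behavioural_beliefs + normative_beliefs + control_beliefs
actual_use ~ intention + control_beliefs
"""

model = semopy.Model(tpb)
model.fit(df)
print(model.inspect())  # path estimates, standard errors, and p-values
```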

  • Research Article
  • Citations: 30
  • 10.1016/j.ejmp.2021.03.015
Performance of an artificial intelligence tool with real-time clinical workflow integration - Detection of intracranial hemorrhage and pulmonary embolism.
  • Mar 1, 2021
  • Physica Medica
  • Nico Buls + 4 more

  • Research Article
  • Citations: 2
  • 10.21900/j.alise.2024.1710
The AI-empowered Researcher: Using AI-based Tools for Success in Ph.D. Programs
  • Oct 16, 2024
  • Proceedings of the ALISE Annual Conference
  • Vanessa Kitzie + 5 more

Generative artificial intelligence (AI) is changing the landscape of graduate education by providing personalized learning, automated feedback, intelligent research assistants, and automated content creation (George, 2023). AI tools can support doctoral students in text generation, language translation, responding to academic queries, and data collection and analysis, and can encourage self-directed learning and the development of thinking skills (Rasul et al., 2023; Zou & Huang, 2023). They can also help doctoral students working as teaching assistants and aid in everyday problems (Can et al., 2023; Parker et al., 2024). However, the rise of AI tools also raises considerations of academic integrity, over-reliance on AI, misinformation, and the potential biases embedded in algorithms (George, 2023; Rasul et al., 2023). Echoing the opportunities and challenges of AI applications in research and learning, the ALISE Doctoral Students SIG wants to encourage a discussion on how doctoral students can use AI tools to empower them in the Ph.D. journey. The panel invites a diverse group of doctoral students/candidates to share how AI tools can facilitate data collection and analysis and their critical understanding of AI systems. Manar Alsaid will talk about using AI and machine learning to detect complex misinformation on social media. The talk aims to enhance our understanding of misinformation and reduce its negative impacts. This presentation will provide valuable insights for research on misinformation and information literacy. Adam Eric Berkowitz will introduce the black-box tinkering method, which experimentally discerns how AI systems operate. The method enhances the transparency of AI systems, challenging the technocratic paradigm. With three examples, Berkowitz encourages attendees to learn what black-box tinkering is, how to identify cases using it, and potential opportunities to incorporate it in research. Anisah Herdiyanti will share insights from a study comparing transcripts generated by Otter.ai and Zoom Meetings. The presentation will highlight both the benefits and challenges of AI-based notes and transcription software, including technical concerns and the convenience of automated result delivery. The audience will enhance their understanding of AI tools in qualitative data transcribing and the ethical considerations in the process. Rebecca Bryant Penrose will showcase the use of HeyGen, an AI-based video generator and translation tool, in an international interview project between students at California State University Bakersfield and a Ukrainian artist/author. The presentation will increase awareness of the potential use of AI-based video and help researchers overcome language barriers in data collection. The panel will last 90 minutes, including a 5-minute introduction and a 5-minute wrap-up. Each panelist will have 10 minutes to present their topics, followed by 5-minute Q&As. A 25-minute moderated roundtable discussion will follow the panelists’ presentations to explore the potential use of different AI tools in research, including ChatGPT and AI-powered article summarizers. The panel’s learning outcomes include (1) identifying challenges and opportunities to incorporate AI tools in research and study and (2) explaining how to interact with AI tools to improve efficiency in research. It also provides a platform for doctoral students to share their knowledge of how AI changes research approaches and to network with each other.
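
As a concrete (and hypothetical) version of the transcript comparison Herdiyanti describes, one could score each tool's output against a hand-corrected reference using word error rate, for example with the jiwer library; the transcript strings below are invented:

```python
# Hypothetical word-error-rate comparison of two automatic transcripts
# against a hand-corrected reference, using the jiwer library.
from jiwer import wer

reference = "generative ai changes how doctoral students collect and analyse data"
otter_out = "generative ai changes how doctoral students collect and analyze data"
zoom_out  = "generative a i changes how doctor students collect an analyse data"

print(f"Otter.ai WER: {wer(reference, otter_out):.2f}")
print(f"Zoom WER:     {wer(reference, zoom_out):.2f}")  # lower means closer to the reference
```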

  • Research Article
  • Citations: 1
  • 10.34190/ecie.19.1.2906
Exploring How Education Can Leverage Artificial Intelligence for Social Good
  • Oct 8, 2024
  • European Conference on Innovation and Entrepreneurship
  • Marie Leddy + 1 more

Artificial intelligence (AI) is rapidly transforming society and industries, presenting both opportunities and ethical challenges. AI enables machines to perform tasks traditionally done by humans, such as natural language processing, pattern recognition, decision-making, and problem-solving (Brookings, 2023). In education, AI enhances teaching methodologies, student assessment, and administrative tasks through tools like intelligent tutoring systems, adaptive learning platforms, and educational chatbots. These tools offer customised learning experiences, immediate feedback, and data-driven insights. This research aims to investigate how AI can be leveraged within education to promote social good by identifying how familiar educators and students are with AI tools, how educators and students perceive the role of AI in education, what the current applications of AI technologies in educational settings are, and how widely they are used. Finally, it discusses the opportunities and ethical considerations of integrating AI in education. AI technologies can address critical social challenges such as inequality, accessibility, and personalised learning. According to Luckin et al. (2016), "AI can provide tailored educational experiences that adapt to individual learning needs, thus promoting equity in education." This exploratory research begins with an overview of AI's role and tools in education, followed by a discussion of the challenges, opportunities, and ethical considerations associated with AI integration. Insights are drawn from educators' responses to a questionnaire and a focus group with first-year and final-year third-level students. This qualitative data, analysed using NVivo software, reveals key themes and significant findings on effectively utilising AI in education.

  • Research Article
  • 10.55606/jupensi.v5i3.6100
Artificial Intelligence (AI) Tools in Supporting Students Academic Writing Tasks: Benefits and Limitations
  • Oct 28, 2025
  • Jurnal Pendidikan dan Sastra Inggris
  • Fitra Ramadani + 3 more

This study aims to explore English Language Education Study Program (ELESP) students' perceptions of the use of artificial intelligence (AI) tools in supporting their academic writing processes and to identify the benefits and limitations they encounter when using these tools. Six sixth-semester students from the ELESP were purposively selected based on their prior engagement with AI tools such as ChatGPT, Grammarly, or QuillBot in academic writing tasks. Data were collected through semi-structured, in-depth interviews that allowed participants to express their experiences openly while giving the researcher flexibility to probe further when necessary. Thematic analysis was used to analyse the data. The findings indicate that the majority of students perceive AI paraphrasing tools as highly useful in the writing process, in both the linguistic and affective dimensions. Most EFL students also perceive AI tools as providing substantial benefits in supporting academic writing. Given these insights, it is suggested that educators and institutions integrate AI tools into writing pedagogy while providing explicit guidance on ethical and critical use.
