Using Artificial Intelligence to Generate Visual Vignettes in Factorial Survey Experiments

Abstract

Factorial survey experiments are widely used in the social sciences to study decision-making and attitudes through controlled, experimentally manipulated scenarios – typically presented as text. While textual vignettes offer flexibility and ease of use, they often lack realism and may limit participant engagement. This article explores how generative artificial intelligence (AI) can be used to create customisable images for visual vignettes. It demonstrates techniques for producing and selectively editing images, highlighting their potential to address the demands of experimental social science research, while also acknowledging key challenges, including ethical considerations, biases inherent in AI tools, and technical limitations. The article showcases potential applications of AI-generated images in social science research and draws on a pretest with human participants to present evidence on how AI-generated images are perceived and interpreted. By critically evaluating both opportunities and challenges, this article provides researchers with practical guidance on integrating AI-generated visuals into factorial survey experiments, enhancing methodological approaches in the social sciences.

Similar Papers
  • Research Article
  • Citations: 34
  • 10.5204/mcj.3004
ChatGPT Isn't Magic
  • Oct 2, 2023
  • M/C Journal
  • Tama Leaver + 1 more

Introduction

Author Arthur C. Clarke famously argued that in science fiction literature “any sufficiently advanced technology is indistinguishable from magic” (Clarke). On 30 November 2022, technology company OpenAI publicly released their Large Language Model (LLM)-based chatbot ChatGPT (Chat Generative Pre-Trained Transformer), and it was instantly hailed as world-changing. Initial media stories about ChatGPT highlighted the speed with which it generated new material as evidence that this tool might be both genuinely creative and actually intelligent, in both exciting and disturbing ways. Indeed, ChatGPT is part of a larger pool of Generative Artificial Intelligence (AI) tools that can very quickly generate seemingly novel outputs in a variety of media formats based on text prompts written by users. Yet claims that AI has become sentient, or has even reached a recognisable level of general intelligence, remain in the realm of science fiction, for now at least (Leaver). That has not stopped technology companies, scientists, and others from suggesting that super-smart AI is just around the corner. Exemplifying this, the same people creating generative AI are also vocal signatories of public letters that ostensibly call for a temporary halt in AI development, yet these letters simultaneously feed the myth that these tools are so powerful that they are the early form of imminent super-intelligent machines. For many people, the combination of AI technologies and media hype means generative AIs are basically magical inasmuch as their workings seem impenetrable, and their existence could ostensibly change the world. This article explores how the hype around ChatGPT and generative AI was deployed across the first six months of 2023, and how these technologies were positioned as either utopian or dystopian, always seemingly magical, but never banal. We look at some initial responses to generative AI, ranging from schools in Australia to picket lines in Hollywood.
We offer a critique of the utopian/dystopian binary positioning of generative AI, aligning with critics who rightly argue that focussing on these extremes displaces the more grounded and immediate challenges generative AI brings that need urgent answers. Finally, we loop back to the role of schools and educators in repositioning generative AI as something to be tested, examined, scrutinised, and played with, both to ground understandings of generative AI and to prepare today’s students for a future where these tools will be part of their work and cultural landscapes.

Hype, Schools, and Hollywood

In December 2022, one month after OpenAI launched ChatGPT, Elon Musk tweeted: “ChatGPT is scary good. We are not far from dangerously strong AI”. Musk’s post was retweeted 9,400 times, liked 73 thousand times, and presumably seen by most of his 150 million Twitter followers. This type of engagement typified the early hype and language that surrounded the launch of ChatGPT, with reports that “crypto” had been replaced by generative AI as the “hot tech topic” and hopes that it would be “‘transformative’ for business” (Browne). By March 2023, global economic analysts at Goldman Sachs had released a report on the potentially transformative effects of generative AI, saying that it marked the “brink of a rapid acceleration in task automation that will drive labor cost savings and raise productivity” (Hatzius et al.). Further, they concluded that “its ability to generate content that is indistinguishable from human-created output and to break down communication barriers between humans and machines reflects a major advancement with potentially large macroeconomic effects” (Hatzius et al.).
Speculation about the potentially transformative power and reach of generative AI technology was reinforced by warnings that it could also lead to “significant disruption” of the labour market, and the potential automation of up to 300 million jobs, with associated job losses for humans (Hatzius et al.). In addition, there was widespread buzz that ChatGPT’s “rationalization process may evidence human-like cognition” (Browne), claims that were supported by the emergent language of ChatGPT. The technology was explained as being “trained” on a “corpus” of datasets, using a “neural network” capable of producing “natural language” (Dsouza), positioning the technology as human-like, and more than ‘artificial’ intelligence. Incorrect responses or errors produced by the tech were termed “hallucinations”, akin to magical thinking, which OpenAI founder Sam Altman insisted wasn’t a word that he associated with sentience (Intelligencer staff). Indeed, Altman asserts that he rejects moves to “anthropomorphize” (Intelligencer staff) the technology; however, arguably the language, hype, and Altman’s well-publicised misgivings about ChatGPT have had the combined effect of shaping our understanding of this generative AI as alive, vast, fast-moving, and potentially lethal to humanity. Unsurprisingly, the hype around the transformative effects of ChatGPT and its ability to generate ‘human-like’ answers and sophisticated essay-style responses was matched by a concomitant panic throughout educational institutions. The beginning of the 2023 Australian school year was marked by schools and state education ministers meeting to discuss the emerging problem of ChatGPT in the education system (Hiatt). Every state in Australia, bar South Australia, banned the use of the technology in public schools, with a “national expert task force” formed to “guide” schools on how to navigate ChatGPT in the classroom (Hiatt).
Globally, schools banned the technology amid fears that students could use it to generate convincing essay responses whose plagiarism would be undetectable with current software (Clarence-Smith). Some schools banned the technology citing concerns that it would have a “negative impact on student learning”, while others cited its “lack of reliable safeguards preventing these tools exposing students to potentially explicit and harmful content” (Cassidy). ChatGPT investor Musk famously tweeted, “It’s a new world. Goodbye homework!”, further fuelling the growing alarm about the freely available technology that could “churn out convincing essays which can't be detected by their existing anti-plagiarism software” (Clarence-Smith). Universities were reported to be moving towards more “in-person supervision and increased paper assessments” (SBS), rather than essay-style assessments, in a bid to out-manoeuvre ChatGPT’s plagiarism potential. Seven months on, concerns about the technology seem to have been dialled back, with educators more curious about the ways the technology can be integrated into the classroom to good effect (Liu et al.); however, the full implications and impacts of generative AI are still emerging. In May 2023, the Writer’s Guild of America (WGA), the union representing screenwriters across the US creative industries, went on strike, and one of their core issues was “regulations on the use of artificial intelligence in writing” (Porter). Early in the negotiations, Chris Keyser, co-chair of the WGA’s negotiating committee, lamented that “no one knows exactly what AI’s going to be, but the fact that the companies won’t talk about it is the best indication we’ve had that we have a reason to fear it” (Grobar).
At the same time, the Screen Actors’ Guild (SAG) warned that members were being asked to agree to contracts stipulating that an actor’s voice could be re-used in future scenarios without that actor’s additional consent, potentially reducing actors to a dataset to be animated by generative AI technologies (Scheiber and Koblin). In a statement, SAG made its position clear that the creation or (re)animation of any digital likeness of any part of an actor must be recognised as labour and properly paid, also warning that any attempt to legislate around these rights should be strongly resisted (Screen Actors Guild). Unlike the more sensationalised hype, the WGA and SAG responses to generative AI are grounded in labour relations. These unions quite rightly fear an immediate future where human labour could be augmented, reclassified, and exploited by, and in the name of, algorithmic systems. Screenwriters, for example, might be hired at much lower pay rates to edit scripts first generated by ChatGPT, even if those editors would really be doing most of the creative work to turn something clichéd and predictable into something more appealing. Rather than a dystopian world where machines do all the work, the WGA and SAG protests railed against a world where workers would be paid less because executives could pretend generative AI was doing most of the work (Bender).

The Open Letter and Promotion of AI Panic

In an open letter that received enormous press and media uptake, many of the leading figures in AI called for a pause in AI development since “advanced AI could represent a profound change in the history of life on Earth”; they warned early 2023 had already seen “an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control” (Future of Life Institute).
Further, the open letter signatories called on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”, arguing that “labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts” (Future of Life Institute). Notably, many of the signatories work for the very companies involved in the “out-of-control race”. Indeed, while this letter could be read as a moment of ethical clarity for the AI industry, a more cynical reading might just be that in warning that their AIs could effectively destroy the w

  • Discussion
  • Citations: 3
  • 10.1108/jices-10-2024-0145
Generative AI tools (ChatGPT*) in social science research
  • Jan 20, 2025
  • Journal of Information, Communication and Ethics in Society
  • Rigin Sebastian + 3 more

Purpose: This paper aims to critically examine the implications of using generative artificial intelligence (AI) models, such as ChatGPT and Bard, in social science research. It examines the doppelganger effect in AI-driven studies as well as cognitive dissonance brought on by the autonomy of these tools. The discussion also addresses the debate between quantitative and qualitative methods for evaluating AI-driven research, scrutinising existing guidelines for accountability and validity. In addition, the paper considers the potential for generative AI to dominate research, identifying “non-takeoverable” skills and ethical issues in AI-driven knowledge production.

Design/methodology/approach: This work primarily draws on research articles for conceptual clarity, while news media reports are used to illustrate current scenarios.

Findings: The doppelganger effect raises concerns about situations in which AI reproduces existing work so closely that credit may be misattributed. This has prompted critical review of methods for ensuring that generative AI outputs are authentic and original. Generative AI can enhance data collection and analysis, offering alternative approaches to traditional research methodologies. By leveraging its capabilities, researchers can potentially uncover new insights and perspectives from their data.

Originality/value: It is crucial to acknowledge the ethical concerns associated with using generative AI in social science research. The deployment of such technology introduces the possibility of biases and other ethical challenges that may affect the cognitive abilities of human participants or researchers involved in the research process. The work encourages ethical consideration and highlights crucial human abilities that remain necessary, providing a novel viewpoint on the use of generative AI in research approaches.

  • Research Article
  • Citations: 8
  • 10.9734/ajrcos/2024/v17i7491
Impact of Generative AI in Academic Integrity and Learning Outcomes: A Case Study in the Upper East Region
  • Jul 30, 2024
  • Asian Journal of Research in Computer Science
  • Japheth Kodua Wiredu + 2 more

With the increasing use of Generative Artificial Intelligence (AI) tools like ChatGPT and Bard, universities face challenges in maintaining academic integrity. This research investigates the impact of these tools on learning outcomes (factual knowledge, comprehension, critical thinking) in selected universities of Ghana's Upper East Region during the 2023-2024 academic year. The study specifically analyzes changes in student comprehension and academic integrity concerns when using Generative AI for content generation, research assistance, and summarizing complex topics. A mixed-methods approach was employed, combining qualitative data from interviews and open-ended questions with quantitative analysis of survey data and academic records. The research focuses on three institutions: C. K. Tedam University of Technology and Applied Sciences, Bolgatanga Technical University, and Regentropfen University College. A purposive sampling technique recruited 150 participants (50 from each university) who had used Generative AI tools. Key findings show that 72% of students reported improved understanding of course material through Generative AI use, yet 75% cited academic integrity as a primary concern. Quantitative analysis revealed a weak to moderate positive correlation (r = 0.45) between AI tool usage and improved grades, with variations depending on the specific AI tasks performed. Qualitative data highlighted concerns about overreliance on AI and its impact on critical thinking skills. This research contributes to the ongoing debate on AI's role in education by providing valuable insights for educators and policymakers worldwide. The findings suggest that while AI tools can enhance comprehension, ethical considerations and potential drawbacks related to critical thinking require careful attention. 
The study concludes with recommendations for integrating AI literacy programs, developing ethical guidelines, and implementing advanced plagiarism detection systems to harness the benefits of Generative AI while mitigating risks to academic integrity. Although specific to the Upper East Region of Ghana, these insights may be applicable to other educational systems with similar characteristics.
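The usage-vs-grade correlation the Ghana study reports (r = 0.45) is a standard Pearson coefficient; a minimal sketch of that computation is shown below. The data here are invented for illustration and are not from the study.

```python
# Hypothetical sketch: computing a Pearson correlation between AI tool
# usage and grades, as in the r = 0.45 figure reported above.
# All data values are invented for illustration.
import numpy as np

ai_usage_hours = np.array([1, 3, 5, 2, 8, 6, 4, 7])    # weekly AI tool usage
exam_scores    = np.array([55, 60, 72, 58, 70, 80, 62, 68])

# np.corrcoef returns a 2x2 correlation matrix; the off-diagonal entry
# is the Pearson r between the two variables.
r = np.corrcoef(ai_usage_hours, exam_scores)[0, 1]
print(f"Pearson r = {r:.2f}")
```

A coefficient in this range indicates a weak to moderate positive association, consistent with the study's interpretation.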

  • Research Article
  • Citations: 74
  • 10.1038/embor.2009.80
Lab‐scale intervention
  • May 1, 2009
  • EMBO reports
  • Daan Schuurbiers + 1 more

From mobile phones and laptop computers to in vitro fertilization and social networks on the Internet, technological devices, products and services are increasingly shaping the lives of people around the world. The pervasiveness of technology and the underlying science that makes it possible has led to a certain ambivalence: most people trust that ‘science’ will eventually help them to live longer, healthier and happier lives. However, they also feel increasingly uncomfortable about certain new technologies, often those that challenge or improve on ‘nature’. Genetically modified crops, gene therapy, stem cell research, cloning, renewed interest in nuclear power: the list of controversial topics involving science and technology is growing steadily and debates on these topics regularly occupy centre stage in public and political arenas. Policy‐makers have responded by calling for increased attention to be paid to the ethical, legal and social aspects of scientific research and technological developments. In particular, new and emerging areas of research—such as genomics, synthetic biology and nanotechnology—have been accompanied by studies of their broader societal implications as well as public‐engagement efforts, in order to guide research and development in ways that respect societal concerns. Such attempts to shape technological trajectories have traditionally occurred both before scientific research, for example, through research policy, technology assessment or public participation, and afterwards, through regulations or market mechanisms. Although these stages are crucial points at which to intervene, the research process itself constitutes a largely overlooked opportunity for addressing social concerns. Indeed, if one acknowledges the central role that scientific research has in the innovation process, this is an area well worth examining.
Shaping technological trajectories will, at some point, include shaping the very research processes that help to characterize them (Fisher et al., 2006). Social and …

  • Research Article
  • Citations: 25
  • 10.62411/jcta.9447
A Comparative Analysis of Generative Artificial Intelligence Tools for Natural Language Processing
  • Feb 26, 2024
  • Journal of Computing Theories and Applications
  • Aamo Iorliam + 1 more

Generative artificial intelligence tools have recently attracted a great deal of attention because of their considerable advantages, which include ease of use, quick generation of answers to requests, and the human-like intelligence they exhibit. This paper presents a vivid comparative analysis of the top nine generative artificial intelligence (AI) tools, namely ChatGPT, Perplexity AI, YouChat, ChatSonic, Google's Bard, Microsoft Bing Assistant, HuggingChat, Jasper AI, and Quora's Poe, paying attention to the pros and cons each tool presents. This comparative analysis shows that the generative AI tools have several pros that outweigh the cons. Further, we explore the transformative impact of generative AI in Natural Language Processing (NLP), focusing on its integration with search engines, privacy concerns, and ethical implications. A comparative analysis categorizes generative AI tools based on popularity and evaluates challenges in development, including data limitations and computational costs. The study highlights ethical considerations such as technology misuse and regulatory challenges. Additionally, we delve into AI planning techniques in NLP, covering classical planning, probabilistic planning, hierarchical planning, temporal planning, knowledge-driven planning, and neural planning models. These planning approaches are vital in achieving specific goals in NLP tasks. In conclusion, we provide a concise overview of the current state of generative AI, including its challenges, ethical considerations, and potential applications, contributing to the academic discourse on human-computer interaction.

  • Research Article
  • 10.69554/ofrh4163
Exploring the impact of generative AI literacy on teaching practices and pedagogical alignment
  • Sep 1, 2025
  • Advances in Online Education: A Peer-Reviewed Journal
  • Gennadii Miroshnikov + 1 more

The embedding of generative AI (GenAI) tools in education has become a groundbreaking development, creating significant opportunities to enrich teaching practices and drive innovative learning design. This study investigates how educators evaluate the impact of these tools on their teaching, the extent to which their practices align with established pedagogical frameworks and how artificial intelligence (AI) literacy influences their adoption and use of such technologies. Employing a mixed-methods approach, the research analysed survey data from participants in a Generative AI in Education massive open online course (MOOC). Quantitative findings reveal high levels of satisfaction with AI tools, with educators reporting improved engagement and efficiency. Qualitative insights highlight key benefits, such as support for higher-order thinking and personalised learning, alongside challenges related to time constraints, AI literacy gaps and resource limitations. This study employs theoretical frameworks, including Technological Pedagogical Content Knowledge (TPACK), Substitution, Augmentation, Modification and Redefinition (SAMR) and Bloom’s Taxonomy, to evaluate how educators integrate AI into their teaching. While many respondents reported achieving a balance between traditional and innovative pedagogies, fewer utilised AI tools for transformative practices. The findings underscore the need for targeted professional development, tailored resources and ongoing ethical considerations to maximise the benefits of AI in education. This research advances the discussion on AI-enhanced teaching by offering actionable insights for educators and institutions aiming to align AI tools with pedagogical goals. By addressing barriers and utilising the functionalities of GenAI, this study advocates for its role in redefining teaching and learning, setting the stage for more engaging, efficient and equitable educational experiences.

  • Research Article
  • 10.34190/ecie.19.1.2468
Exploring the potential of AI to increase productivity in small marketing teams
  • Sep 20, 2024
  • European Conference on Innovation and Entrepreneurship
  • Aniko Szenftner + 2 more

Marketing scientists and practitioners alike believe that artificial intelligence (AI) holds the promise of productivity gains for organizations, yet there has been little scientific research into these claims. This study investigates the role of AI in enhancing marketing productivity, deriving insights from a case study conducted with the marketing team of an industrial software start-up. Drawing upon Case Study Analysis by Yin (2018) and Participatory Action Research by Kemmis and McTaggart (2007), the study employs a combination of survey interviews, AI tool research, and AI tool testing. Key findings indicate that productivity gains are more likely than productivity impairments with the use of marketing AI tools. This effect is even stronger when knowledge workers possess high levels of AI skills and utilize AI tools with suitable capabilities. Among the six marketing disciplines analyzed closely, SEO/content and design in particular demonstrated significant productivity gains, both from generative AI (GAI) tools the team already subscribed to, such as ChatGPT 4 and Canva, and from new AI solutions. While an AI tool’s level of integration showed only a weak positive productivity impact, future studies are suggested to further investigate this variable by comparing the effects of less advanced but more accessible tools, like generative AI, versus highly advanced but less accessible business AI. Having navigated the vast and dynamic landscape of AI tools, the insights further emphasize the importance of AI experience sharing and informed decision-making, implying knowledge of one's own user rights and staying updated on AI advancements. Zooming out from the process level, the work's literature review further highlights the role of environmental and organizational AI enablers, such as budget allocation, fostering AI trust and mindset, and implementing AI routines and responsibilities.
Overall, this research underscores the imperative for companies, especially startups and SMEs, to explore AI technology as a means to enhance productivity and gain a competitive edge.

  • Research Article
  • Citations: 13
  • 10.31703/gssr.2023(viii-ii).19
Ethics and Privacy in Irish Higher Education: A Comprehensive Study of Artificial Intelligence (AI) Tools Implementation at University of Limerick
  • Jun 30, 2023
  • Global Social Sciences Review
  • Muhammad Irfan + 2 more

This research paper presents an insightful investigation into the perceptions and ethical considerations of students regarding the use of Artificial Intelligence (AI) tools in academia, particularly focusing on the University of Limerick in Ireland. Herein, AI tools like OpenAI's ChatGPT have emerged as valuable assets in promoting interactive learning and enhancing student engagement. Thus, this research aimed to explore the privacy and ethical considerations students have regarding the use of AI tools in education. Using a quantitative methodological approach, the study gathered students' attitudes, opinions, and usage patterns regarding AI utilities. The study revealed intriguing perspectives on data privacy concerns associated with AI tools. Students from technology and science-focused schools displayed a higher degree of concern, suggesting their deeper understanding of potential privacy implications. Conversely, students from arts, humanities, and social sciences, and law, politics & public administration displayed slightly lower levels of concern.

  • Research Article
  • Citations: 8
  • 10.1287/ijds.2023.0007
How Can IJDS Authors, Reviewers, and Editors Use (and Misuse) Generative AI?
  • Apr 1, 2023
  • INFORMS Journal on Data Science
  • Galit Shmueli + 7 more


  • Research Article
  • Citations: 1
  • 10.1080/26408066.2025.2548853
Artificial Intelligence in Systematic Literature Reviews: Social Work Ethics, Application, and Feasibility
  • Aug 22, 2025
  • Journal of Evidence-Based Social Work
  • Robert Lucio + 4 more

Purpose: This study explores the alignment between themes identified by Artificial Intelligence (AI)-powered tools and those from a traditional, manual scoping review, focusing on generative AI’s role in streamlining time-intensive research processes.

Materials and Methods: Thematic findings from a human-driven scoping review on peer support specialists in medical settings for opioid use disorder (OUD) were compared with outputs from NotebookLM, UTVERSE, and Gemini. Fifteen peer-reviewed articles were uploaded to each AI tool, and a standardized prompt directed the generative AI to identify themes using only the provided articles, which were then compared to the human-coded findings.

Results: The AI models identified between 53% and 80% of the themes found in the original manual analysis. While AI tools identified novel themes that could broaden the scope of analysis, they also generated inaccurate or misleading themes and overlooked others entirely.

Discussion: The variability in generative AI performance highlights its potential and limitations in thematic analysis. AI identified additional themes and misinterpreted or missed others. Human expert review remains necessary to validate the accuracy and relevance of generative AI, while addressing ethical considerations in alignment with the values of the social work profession.

Conclusion: A hybrid approach that combines generative AI with expert review has the potential to support current manual research approaches and establish a robust methodology. Continued evaluation, addressing limitations, and establishing best practices for human-AI collaboration and transparent reporting are crucial for the social work research field.
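The 53%-80% figures reported in that study are, in effect, a recall of human-coded themes among AI-identified ones. A minimal sketch of that comparison is shown below; the theme labels are hypothetical and not taken from the study.

```python
# Hypothetical sketch: quantifying what fraction of human-coded themes
# an AI tool also identified, mirroring the 53%-80% figures above.
# Theme labels are invented for illustration.

def theme_recall(human_themes, ai_themes):
    """Fraction of human-coded themes that also appear in the AI output."""
    human = {t.lower() for t in human_themes}
    ai = {t.lower() for t in ai_themes}
    return len(human & ai) / len(human) if human else 0.0

human_coded = ["peer support", "stigma", "access to care", "relapse prevention"]
ai_generated = ["Peer Support", "Stigma", "Workforce Training"]

print(theme_recall(human_coded, ai_generated))  # 0.5
```

In practice, exact string matching understates agreement (themes are rarely worded identically), which is one reason expert review of AI-generated themes remains necessary.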

  • Research Article
  • 10.20448/jeelr.v12i4.7862
Students’ behavioral intentions toward generative AI in education: Task-technology fit and moral obligations
  • Dec 12, 2025
  • Journal of Education and e-Learning Research
  • Yann-Jy Yang + 3 more

The rapid advancement of generative AI tools, such as ChatGPT, has sparked widespread debate over their impact on academic integrity and educational practices. As these tools become increasingly accessible to students, understanding the factors that influence their adoption in academic settings is essential. The current study explores college students' use of generative artificial intelligence (AI) tools, such as ChatGPT, for completing homework assignments. Drawing on the Task-Technology Fit (TTF) framework and the concept of moral obligation, this research investigates the factors influencing students' behavioral intentions to use generative AI in academic contexts. Data were collected through an online survey of 136 Taiwanese college students. The results indicate that perceived technology characteristics and self-efficacy significantly enhance task-technology fit, positively affecting behavioral intention. Conversely, moral obligation shaped by perceived teacher attitudes negatively influences students' intention to use AI tools for coursework. The study employs Partial Least Squares Structural Equation Modeling (PLS-SEM) to test the hypotheses and explains a substantial proportion of the variance in behavioral intention. These findings provide theoretical insights into how technological and ethical considerations jointly influence AI adoption in education. The study also offers practical suggestions for educators and institutions aiming to guide the responsible use of generative AI in learning environments. This study contributes a novel framework for understanding responsible AI use in higher education.

  • Research Article
  • Citations: 2
  • 10.21900/j.alise.2024.1710
The AI-empowered Researcher: Using AI-based Tools for Success in Ph.D. Programs
  • Oct 16, 2024
  • Proceedings of the ALISE Annual Conference
  • Vanessa Kitzie + 5 more

Generative artificial intelligence (AI) changes the picture of graduate education by providing personalized learning, automated feedback, intelligent research assistants, and automated content creation (George, 2023). AI tools will support doctoral students in text generation, language translation, responding to academic queries, and data collection and analysis and encourage self-learning and thinking development (Rasul et al., 2023; Zou & Huang, 2023). They also would be helpful for doctoral students working as teaching assistants and aiding in daily problems (Can et al., 2023; Parker et al., 2024). However, the rise of AI tools also leads to considerations of academic integrity, over-reliance on AI, misinformation, and the potential biases embedded in algorithms (George, 2023; Rasul et al., 2023). Echoing the opportunities and challenges of AI applications in research and learning, the ALISE Doctoral Students SIG wants to encourage a discussion on how doctoral students can use AI tools to empower us in the Ph.D. journey. The panel invites a diverse group of doctoral students/candidates to share how AI tools can facilitate data collection and analysis and their critical understanding of AI systems. Manar Alsaid will talk about using AI and machine learning to detect complex misinformation on social media. The talk aims to enhance our understanding of misinformation and reduce its negative impacts. This presentation will provide valuable insights for research on misinformation and information literacy. Adam Eric Berkowitz will introduce the black-box tinkering method that experimentally discerns how AI systems operate. The method enhances the transparency of AI systems, challenging the technocratic paradigm. With three examples, Berkowitz encourages attendees to learn what black-box tinkering is, how to identify cases using it, and potential opportunities to incorporate it in research.
Anisah Herdiyanti will share insights from a study comparing transcripts generated by Otter.ai and Zoom Meetings. The presentation will highlight both the benefits and challenges of AI-based notes and transcription software, including technical concerns and the convenience of automated result delivery. The audience will enhance their understanding of AI tools in qualitative data transcribing and the ethical considerations in the process. Rebecca Bryant Penrose will showcase the use of HeyGen, an AI-based video generator and translation tool, in an international interview project between students at California State University Bakersfield and a Ukrainian artist/author. The presentation will increase awareness of the potential use of AI-based video and help researchers overcome language barriers in data collection. The panel will last 90 minutes, including a 5-minute introduction and a 5-minute wrap-up. Each panelist will have 10 minutes to present their topics, followed by 5-minute Q&amp;As. A 25-minute moderated roundtable discussion will follow the panelists’ presentations to explore the potential use of different AI tools in research, including ChatGPT and AI-powered article summarizers. The panel’s learning outcomes include (1) Identifying challenges and opportunities to incorporate AI tools in research and study and (2) Explaining how to interact with AI tools to improve efficiency in research. It also provides a platform for doctoral students to share their knowledge of how AI changes research approaches and networks with each other.

  • Research Article
  • 10.1002/cae.70110
Exploring the Effectiveness of Generative AI as a Learning Tool in Engineering Education: An Analysis of Student Experiences and Perceptions
  • Nov 1, 2025
  • Computer Applications in Engineering Education
  • Abdulaziz Saud Alkabaa + 1 more

Artificial Intelligence (AI) is increasingly adopted by educational institutions, particularly in the form of generative AI (GenAI) tools for e‐learning. This study explores the effectiveness of using GenAI with engineering students at a leading university in Saudi Arabia and the Middle East. It aims to assess GenAI's impact in the College of Engineering and examine gender‐based differences in how students utilize AI as a learning tool. The study also investigates how students from different engineering majors utilize AI in their learning. To achieve this objective, an online survey with 15 questions was distributed to 403 engineering students to analyze their perceptions of AI adoption in education. The study employs two non‐parametric rank‐based statistical tests: the Mann–Whitney test to analyze gender differences, and the Kruskal–Wallis test to examine how various engineering disciplines such as industrial, electrical, mechanical, civil, chemical, nuclear, and mining engineering influence GenAI adoption. The findings reveal significant differences between male and female students in their experiences with GenAI, particularly regarding inaccurate or misleading responses, accurate and reliable responses, and their opinions on how users from applied academic fields view GenAI adoption. The results also indicate notable differences among engineering majors in their proficiency with GenAI features, their experiences with hallucinated responses, their views on using GenAI in theoretical disciplines, and their trust in the accuracy of information provided by ChatGPT. These findings support educational decision‐makers in integrating AI as a learning technology for engineering students and in understanding student engagement with AI tools in education.

  • Research Article
  • Cite Count Icon 1
  • 10.46303/cuper.2025.2
Assessment Redefined: Educational Assessment Meets AI - ChatGPT Challenges
  • Jun 27, 2025
  • Current Perspectives in Educational Research
  • Kleopatra Nikolopoulou

Generative artificial intelligence (AI) is becoming increasingly integrated into educational environments, particularly in higher education. The advanced and growing capabilities of AI tools like ChatGPT for educational assessment (e.g., text generation, problem-solving, assisting with essays and grading, personalized feedback, real-time evaluations) challenge traditional assessments. The purpose of this conceptual paper is to discuss alternative assessment practices – approaches that address the challenges posed by AI tools such as ChatGPT. These approaches include oral exams, presentations, project-based, real-world, and real-time in-class assessments, skills-based and collaborative problem-solving assessments, and ethical assessments, which may reduce the risks of over-reliance on AI. Examples of alternative assessment approaches using ChatGPT as a supportive tool in specific subjects (language & literature, sciences, mathematics, history & social sciences, computer science & engineering, business & economics) are presented. AI tools might be considered supplementary aids that support assessment rather than a replacement for educators. When redefining educational assessment, ethical considerations and academic integrity need to be addressed.

  • Research Article
  • 10.6087/kcse.352
Ethical guidelines for the use of generative artificial intelligence and artificial intelligence-assisted tools in scholarly publishing: a thematic analysis
  • Feb 5, 2025
  • Science Editing
  • Adéle Da Veiga

Purpose: This analysis aims to propose guidelines for artificial intelligence (AI) research ethics in scientific publications, intending to inform publishers and academic institutional policies in order to guide them toward a coherent and consistent approach to AI research ethics. Methods: A literature-based thematic analysis was conducted. The study reviewed the publication policies of the top 10 journal publishers addressing the use of AI in scholarly publications as of October 2024. Thematic analysis using Atlas.ti identified themes and subthemes across the documents, which were consolidated into proposed research ethics guidelines for using generative AI and AI-assisted tools in scholarly publications. Results: The analysis revealed inconsistencies among publishers' policies on AI use in research and publications. AI-assisted tools for grammar and formatting are generally accepted, but positions vary regarding generative AI tools used in pre-writing and research methods. Key themes identified include author accountability, human oversight, recognized and unrecognized uses of AI tools, and the necessity for transparency in disclosing AI usage. All publishers agree that AI tools cannot be listed as authors. Concerns involve biases, quality and reliability issues, compliance with intellectual property rights, and limitations of AI detection tools. Conclusion: The article highlights the significant knowledge gap and inconsistencies in guidelines for AI use in scientific research. There is an urgent need for unified ethical standards, and guidelines are proposed for distinguishing between the accepted use of AI-assisted tools and the cautious use of generative AI tools.
