Introduction to the Special Issue on Human-Centric Generative AI

Abstract

Generative AI increasingly reshapes how people engage with interactive systems. It now plays a vital role in designing, studying, and refining human-centered methods that let individuals interact and collaborate with AI, strengthening their agency and control. This special issue highlights the human role in Generative AI and seeks approaches that equip diverse stakeholders across socio-technical contexts to understand, direct, and steer these systems while enabling responsible innovation. The special issue publishes original research on new interaction techniques that integrate human input into Generative AI’s continual development, studies of interaction paradigms that support more effective human–AI collaboration, and work that deepens understanding of model capabilities. In doing so, we aim to build a research community around Human-Centric GenAI that empowers people to actively shape systems in line with their values, needs, and expectations.

Similar Papers
  • Research Article
  • Cited by 2173
  • 10.1016/j.ijinfomgt.2023.102642
Opinion Paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy
  • Mar 11, 2023
  • International Journal of Information Management
  • Yogesh K Dwivedi + 72 more

  • Research Article
  • 10.65106/apubs.2025.2774
Postcards of practice
  • Nov 28, 2025
  • ASCILITE Publications
  • Michael Cowling + 4 more

The rapid rise of Generative AI (GenAI) tools is reshaping conversations about assessment and feedback in higher education. While much institutional attention focuses on detection, compliance, and academic integrity (Cotton et al., 2024), this presentation shifts the lens to educators and how they are actually using GenAI in assessment practice. We present findings from a grant-funded initiative at UNSW that explores educator-led innovation through a Postcards of Practice approach. The Postcards of Practice are one-page, practice-based narratives where educators document their use of GenAI tools. These postcards highlight applications including formative feedback generation, student prompting literacy, assessment redesign, and co-creation with AI. They reveal how educators are experimenting with GenAI to support student learning while navigating ethical concerns, transparency, and pedagogical alignment. Our study uses a qualitative interpretive methodology, combining thematic analysis of the postcards with follow-up interviews. The analysis draws on theoretical frameworks including feedback literacy (Carless & Boud, 2018), dialogic assessment (Nicol, 2010), and new paradigm feedback design (Winstone & Carless, 2020). We also apply institutional and national GenAI guidelines (Liu & Bridgeman, 2023; Perkins, 2023) to surface shared values such as authenticity, inclusivity, and responsible innovation that guide educators’ decisions. The aim of this study is to explore how educators are experimenting with GenAI in assessment and feedback, and to capture their emerging practices and reflections through the Postcards of Practice initiative. The central research question guiding this work is: How are educators integrating GenAI into assessment and feedback, and what opportunities, challenges, and support needs arise from these practices? 
This work advances Technology Enhanced Learning (TEL) by providing empirical insights into how GenAI is actually integrated at the coalface of teaching. Educators describe how GenAI supports more frequent, personalised feedback and builds student agency in learning. At the same time, they raise concerns about over-reliance, AI hallucination, and the need for clear pedagogical scaffolding. These reflections point to the need for professional development that is discipline-sensitive, responsive, and grounded in practice. The postcard approach also functions as a professional learning intervention. It prompts reflection, encourages cross-disciplinary dialogue, and helps build a local community of practice around GenAI use. Through this model, we demonstrate an innovative and scalable method of capturing and supporting TEL innovation in real time. The findings suggest GenAI is prompting a rethinking of assessment: from summative, compliance-driven models to more transparent, formative, and student-centred designs. Educators begin to embed feedback literacy, ethical AI use, and critical prompting into their teaching, with clear implications for program-level assessment and graduate capability development. To strengthen clarity, we propose a concise diagram mapping the emerging practices captured in the postcards against the theoretical frameworks of feedback literacy, dialogic assessment, and new paradigm feedback design. This visual representation illustrates how practical insights align with, extend, or challenge these frameworks, making the study’s contribution accessible across diverse tertiary contexts. This proposal offers exemplary innovation in TEL by foregrounding bottom-up, practice-led experimentation with GenAI. It is grounded in strong theoretical frameworks and applicable across diverse tertiary contexts. The Pecha Kucha format will present key insights through rich visual storytelling, including excerpts from the postcards themselves. 
We conclude by proposing future directions for research and institutional strategy, including how to embed GenAI into assessment ecosystems in ways that enhance learning, uphold integrity, and empower educators to lead digital transformation from within.

  • Research Article
  • Cited by 5
  • 10.1017/dsj.2025.2
How generative AI supports human in conceptual design
  • Jan 1, 2025
  • Design Science
  • Liuqing Chen + 5 more

Generative Artificial Intelligence (Generative AI) is a collection of AI technologies that can generate new information such as text and images. With its strong capabilities, Generative AI has been actively studied in creative design processes. However, limited studies have explored the roles of humans and Generative AI in conceptual design processes, leaving a gap in research on human–AI collaboration. To address this gap, this study attempts to uncover the contributions of different Generative AI technologies in assisting humans in the conceptual design process. Novice designers were recruited to complete two design tasks, either with or without the assistance of Generative AI. The results revealed that Generative AI primarily assists humans in the problem definition and idea generation stages, while the idea selection and evaluation stage remains predominantly human-led; even so, this stage was further enhanced with Generative AI’s assistance. Based on the findings, we discuss the role of Generative AI in human–AI collaboration and the implications for enhancing future conceptual design support.

  • Research Article
  • 10.1007/s00146-025-02291-0
Sisters, not twins: exploring artistic control and anthropomorphism through composing with a bespoke generative AI
  • Mar 12, 2025
  • AI & SOCIETY
  • Alexis Weaver

Generative AI (GenAI) has the potential to affect artists’ control over their own music due to the illegal usage of copyrighted material for training. However, GenAI also creates exciting opportunities for artists to expand their material and working processes. Artists working with GenAI and documenting their outcomes can assist other artists as well as wider society in understanding how GenAI operates and can benefit human artistic output. This paper provides an autoethnographic case study into how a new GenAI tool influenced an established composing practice during the writing of the experimental musical work, Control Yourself (2023). The Koup Music prototype by Kopi Su Studio was trained on vocal inputs by the author and subsequently generated bespoke sonic material. While identifiably true to the author’s musical—and literal—voice, the outputs were novel and perceived as imbued with emotion, leading to subsequent anthropomorphising of the AI. Written by a former AI sceptic, this paper details how the emotive power of the AI’s non-verbal, human-like sounds informed the narrative and structure of the resulting work and imparted a sense of collaboration, rather than solo authorship. Furthermore, the influence of the AI was felt beyond its actual involvement, with the project taking on a more playful approach less centred on the artistic control of the human composer. Following these observations, this paper discusses how GenAI served as a tool for musical experimentation and exploring creative ‘blind spots.’ These insights are also contextualised by current discourse on the perception and use of GenAI in the arts, the role of artistic control in human–AI co-creation, and how anthropomorphism has manifested in past human–AI partnerships.

  • Research Article
  • 10.32628/cseit2410612455
Generative AI-Powered Document Processing at Scale with Fraud Detection for Large Financial Organizations
  • Oct 31, 2024
  • International Journal of Scientific Research in Computer Science, Engineering and Information Technology
  • Sachin Dixit

This research paper explores the transformative potential of generative AI in the context of document processing within large financial organizations, with a particular focus on fraud detection. As financial institutions increasingly rely on vast amounts of documentation for operations ranging from customer onboarding to compliance, the inefficiencies and limitations of traditional manual processing methods become glaringly apparent. These legacy systems are not only time-consuming and prone to human error but also struggle with scalability, a critical requirement in today’s fast-paced financial environment. Moreover, manual systems and traditional Optical Character Recognition (OCR) engines often lack the necessary accuracy and contextual understanding to reliably process complex financial documents and detect fraudulent activities. While OCR technology has automated certain aspects of document processing, its inherent limitations in accuracy, particularly in dealing with degraded documents or complex layouts, and its inability to interpret context, significantly impede its effectiveness in high-stakes financial applications. Furthermore, OCR’s limited capability in detecting subtle indicators of fraud leaves financial organizations vulnerable to increasingly sophisticated fraudulent schemes. Generative AI emerges as a revolutionary solution to these challenges by enhancing the accuracy, scalability, and security of document processing systems. Unlike traditional OCR, generative AI models are designed to understand and interpret the context of documents, thereby significantly improving the accuracy of text recognition, even in complex scenarios. These AI models, trained on vast datasets, are capable of processing large volumes of documents in parallel, making them ideally suited for the high-speed, high-volume environments characteristic of financial institutions. 
Additionally, generative AI incorporates advanced algorithms that enhance fraud detection capabilities by analyzing patterns, detecting anomalies, and cross-referencing data across multiple documents. This approach not only improves the detection of fraudulent activities but also reduces the likelihood of false positives, thereby enhancing the overall reliability of the system. The paper further delves into the practical applications of generative AI in various critical areas within financial organizations. Key applications include Know Your Customer (KYC) compliance, where AI streamlines the processing and verification of customer documents, thereby ensuring both compliance with regulatory requirements and the authenticity of the information provided. In loan processing, generative AI accelerates the analysis of loan applications, providing real-time risk assessments that enable faster decision-making. Additionally, the technology is applied in invoice and payment processing, where it automates and verifies transactions, reducing errors and ensuring the timely execution of financial operations. In the realm of contract analysis, generative AI facilitates the extraction and interpretation of key terms and clauses, enabling more effective contract negotiation and management. Beyond its practical applications, the paper also addresses the continuous learning capabilities of generative AI models, which allow them to evolve and adapt to new data and document types over time. This feature is particularly crucial in the financial sector, where the types of documents and the nature of fraudulent activities are continually changing. The continuous learning aspect of generative AI ensures that the systems remain up-to-date and effective, even as new challenges and document types emerge. 
The research also highlights the comparative analysis between traditional OCR-based systems and AI-powered systems, demonstrating the superior performance, efficiency, and scalability of the latter. Moreover, the paper discusses the challenges associated with the implementation of generative AI in financial document processing. These include technical challenges such as the integration of AI systems with existing IT infrastructure, as well as regulatory and compliance issues that arise when deploying AI technologies in the highly regulated financial sector. Despite these challenges, the paper argues that the long-term benefits of adopting generative AI, including improved accuracy, enhanced fraud detection, and greater operational efficiency, far outweigh the initial hurdles. The research also considers the future of generative AI in financial document processing, suggesting that as the technology continues to advance, its applications and benefits will expand even further. Future research opportunities are identified, particularly in the areas of improving the efficiency and scalability of AI models, enhancing their ability to handle increasingly complex document types, and developing more sophisticated fraud detection algorithms. The paper concludes with a discussion on the potential long-term impact of generative AI on the financial industry, arguing that it will play a crucial role in shaping the future of financial operations by providing more accurate, scalable, and secure document processing solutions. This paper makes a significant contribution to the existing body of knowledge on the application of AI in financial services, particularly in the area of document processing and fraud detection. By providing a detailed analysis of the challenges faced by financial organizations and demonstrating how generative AI can address these challenges, the research offers valuable insights for both academic researchers and practitioners in the field. 
The findings presented in this paper have important implications for the future of document processing in financial organizations, suggesting that the adoption of generative AI will be essential for maintaining operational efficiency, accuracy, and security in an increasingly complex and fast-paced financial environment. In summary, this research not only highlights the transformative potential of generative AI in financial document processing but also provides a roadmap for its successful implementation in large financial organizations, with a particular emphasis on enhancing fraud detection capabilities.

  • Research Article
  • Cited by 99
  • 10.9781/ijimai.2023.07.006
What Do We Mean by GenAI? A Systematic Mapping of The Evolution, Trends, and Techniques Involved in Generative AI.
  • Dec 1, 2023
  • International Journal of Interactive Multimedia and Artificial Intelligence
  • Francisco José García Peñalvo + 1 more

Artificial Intelligence has become a focal point of interest across various sectors due to its ability to generate creative and realistic outputs. A specific subset, generative artificial intelligence, has seen significant growth, particularly in late 2022. Tools like ChatGPT, Dall-E, or Midjourney have democratized access to Large Language Models, enabling the creation of human-like content. However, the concept 'Generative Artificial Intelligence' lacks a universally accepted definition, leading to potential misunderstandings. While a model that produces any output can be technically seen as generative, the Artificial Intelligence research community often reserves the term for complex models that generate high-quality, human-like material. This paper presents a literature mapping of AI-driven content generation, analyzing 631 solutions published over the last five years to better understand and characterize the Generative Artificial Intelligence landscape. Our findings suggest a dichotomy in the understanding and application of the term "Generative AI". While the broader public often interprets "Generative AI" as AI-driven creation of tangible content, the AI research community mainly discusses generative implementations with an emphasis on the models in use, without explicitly categorizing their work under the term "Generative AI".

  • Conference Article
  • 10.28945/5535
Preparing for the Future: An Initial Examination of Generative AI’s Integration into Unified Communications Through the Lens of Microsoft Copilot in Teams
  • Jan 1, 2025
  • Joy Fluker + 2 more

Aim/Purpose: This study explores how generative AI is being integrated into unified communications (UC) platforms, focusing specifically on Microsoft Copilot as implemented in Microsoft Teams. It explores how generative AI enhances UC functionalities, identifies key adoption challenges, and provides insights into implementation strategies. Unlike traditional technologies that followed a gradual adoption curve, Copilot’s integration into Teams has the potential to accelerate its adoption, necessitating organizations to be proactive in their planning for its use.
Background: UC platforms have transformed enterprise communication by integrating multiple tools into a single interface. The integration of generative AI into UC introduces automation of complex routine and time-intensive tasks, enhanced decision support, and workflow optimization. However, adoption dynamics, user experiences, and long-term organizational impacts remain underexplored.
Methodology: This study employs a meta-analytic approach, synthesizing findings from peer-reviewed articles, conference proceedings, and industry reports. The analysis categorizes user perceptions of AI usefulness, key adoption barriers, and best practices for integration.
Contribution: This study evaluates the emerging literature on generative AI in UC platforms, focusing on initial user impressions and adoption challenges. Given the technology’s early stage, the findings provide preliminary insights to help organizations plan for effective AI integration in UC environments.
Findings: The findings indicate that generative AI in UC platforms enhances productivity, streamlines workflows, and improves decision support through features such as meeting summarization, transcription, and AI-driven content generation. However, adoption challenges, including resistance to change, data privacy concerns, and integration complexities, remain key barriers.
Recommendations for Practitioners: Preliminary findings indicate that users recognize the value of UC platforms integrated with generative AI and anticipate increasing benefits over time. However, successful adoption requires strategic planning to address implementation challenges and ensure effective deployment.
Recommendations for Researchers: As AI technologies evolve, further research is needed to assess the long-term impact of generative AI in UC platforms on workplace efficiency, productivity gains, user adaptation, and organizational transformation. Comparative research across industries can provide domain-specific best practices, while investigations into human-AI collaboration should examine the balance between automation and human oversight to optimize AI’s role in workplace communication.
Impact on Society: The integration of generative AI in UC platforms has far-reaching implications for enterprise communication, workforce collaboration, and digital transformation. AI-driven automation is poised to enhance workplace efficiency, but responsible governance and deployment are crucial for ensuring fair and transparent adoption.
Future Research: Future research is needed to explore the evolving role of agentic AI and its impact on enterprise workflows and strategic decision-making. Studies should assess its role in reducing cognitive load and enhancing team coordination while also addressing adoption challenges such as ethics, automation reliability, and user trust in autonomous AI systems.

  • Research Article
  • 10.28945/5514
Preparing for the Future: An Initial Examination of Generative AI’s Integration into Unified Communications Through the Lens of Microsoft Copilot in Teams
  • Jan 1, 2025
  • Issues in Informing Science and Information Technology
  • Joy Fluker + 2 more


  • Research Article
  • Cited by 2
  • 10.1111/bjet.13613
The role of critical thinking on undergraduates' reliance behaviours on generative AI in problem‐solving
  • Jul 29, 2025
  • British Journal of Educational Technology
  • Chenyu Hou + 2 more

There is a heightened concern over undergraduate students being over‐reliant on Generative AI and using it recklessly. Reliance behaviours describe the frequencies and ways that people use AI tools for tasks such as problem‐solving, influenced by individual factors such as trust and AI literacy. One way to conceptualise reliance is that reliance behaviours are affected by the extent to which learners consciously evaluate the relative performance of AI and humans, suggesting the potential impacts of critical thinking on reliance. This study, thus, empirically investigates the relationship between critical thinking and reliance behaviours. Critical thinking includes disposition and skills. However, limited empirical studies have investigated how critical thinking influences learners' reliance behaviours when solving problems with Generative AI. Hence, the current study conducted path analyses to investigate how critical thinking is associated with reliance behaviours and how it mediates the effect of individual factors on reliance behaviours. We collected 808 survey responses on critical thinking disposition and skills, reliance behaviours (a self‐developed and validated scale, including reflective use, cautious use, thoughtless use, and collaborative use), trust towards AI, and AI literacy from undergraduates after a problem‐solving task with Generative AI. The results indicate that (1) critical thinking is positively associated with the collaborative, reflective, and cautious use of Generative AI, suggesting that these three types of use of Generative AI could be considered desirable behaviours in human–AI problem‐solving; (2) trust positively predicts thoughtless use; (3) critical thinking can offset the influence of trust on collaborative, reflective and cautious use; and (4) critical thinking can amplify the influence of AI literacy on reflective, cautious and collaborative use. 
This study contributes new insights into understanding the role of critical thinking in fostering desirable reliance behaviours, including reflective, cautious and collaborative use, and provides implications for future interventions when applying Generative AI for problem‐solving.
Practitioner notes
What is already known about this topic?
  • Generative AI tools can potentially enhance problem‐based learning (PBL) by supporting brainstorming and solution refinement.
  • Reliance behaviours in human‐AI collaboration are influenced by factors such as trust in AI and AI literacy.
  • Strategy‐graded reliance emphasizes the reasoning process leading to reliance behaviours, focusing on thoughtful engagement with AI tools, and this cognitive process can be captured by critical thinking.
What this paper adds?
  • Critical thinking is positively associated with the reflective, collaborative, and cautious use of Generative AI.
  • Critical thinking mediates the effects of trust and AI literacy on reliance behaviours, amplifying reflective, cautious and collaborative use while mitigating the thoughtless use of Generative AI.
  • The study introduces a nuanced understanding of reliance behaviours by applying a strategy‐graded framework, emphasising cognitive engagement rather than a purely outcome‐based understanding of reliance behaviours.
Implications for practice and/or policy
  • Educational interventions could consider critical thinking when integrating AI tools in problem‐solving contexts.
  • Students' trust in AI needs to be balanced with critical thinking skills to reduce overreliance and enhance thoughtful engagement with AI tools.

  • Research Article
  • Cited by 3
  • 10.1016/j.dib.2024.110332
The TrollLabs open hackathon dataset: Generative AI and large language models for prototyping in engineering design
  • Mar 16, 2024
  • Data in Brief
  • Daniel Nygård Ege + 6 more

The TrollLabs Open dataset includes comprehensive information that offers a comparison of design practices and outcomes between human participants and Generative AI during a hackathon event. The dataset was curated by running a prototyping hackathon designed to assess the abilities and performance of generative AI, specifically ChatGPT, in the early stages of engineering design. This assessment involved comparing ChatGPT's performance to that of experienced engineering students in a hackathon setting, where participants competed by making a prototype that fires a NERF dart as far as possible. In this setup, all ideas, concepts, strategies, and actions undertaken by the AI-controlled team were autonomously generated by ChatGPT, without human intervention or guidance, but implemented by two participants. Five self-directed baseline teams competed against the AI team. The dataset comprises 116 prototype entries and 433 edges (connections) that enable comparative analysis of design practices and performance between the team instructed solely by generative AI and baseline teams of experienced engineering design students. Prototypes and their attribute data were captured using Pro2booth, an online prototype capture platform running on participants' phones and computers. The dataset includes a transcript of the conversation between ChatGPT and the team responsible for implementing its recommendations, featuring 97 exchanges of prompts and responses. It contains the initial prompt used to instruct the AI, the objective and rules of the hackathon, and the objective performance of teams, showing the ChatGPT team finishing 2nd among six teams. To the authors' knowledge, the TrollLabs Open dataset is the first and only open resource that directly compares the performance of generative AI with human teams in an engineering design context.
Thus, it is intended to be a valuable resource to design researchers, engineering and design students, educators, and industry professionals seeking to find strategies for implementing generative AI tools in their design processes. By offering a comprehensive data collection, the dataset enables external researchers to conduct in-depth analyses that could highlight the practical implications of integrating generative AI in design practices, possibly providing an overview of its limitations and presenting recommendations for improved integration in the design process.
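For readers wanting a feel for analyses such a prototype-and-edge dataset enables, here is a minimal sketch. The record layout below is hypothetical (the actual TrollLabs Open export and Pro2booth schema are not described in this abstract); it only illustrates tallying entries per team and deriving parent–child connections from an edge-list style representation.

```python
from collections import Counter

# Hypothetical rows: (prototype_id, team, parent_id).
# The real dataset's fields may differ -- this is an illustrative shape only.
prototypes = [
    ("p1", "AI", None), ("p2", "AI", "p1"),
    ("p3", "team1", None), ("p4", "team1", "p3"), ("p5", "team1", "p4"),
]

def team_counts(rows):
    """Count prototype entries per team."""
    return Counter(team for _, team, _ in rows)

def edges(rows):
    """Derive (parent, child) connections from parent links."""
    return [(parent, pid) for pid, _, parent in rows if parent is not None]

print(team_counts(prototypes))  # Counter({'team1': 3, 'AI': 2})
print(edges(prototypes))        # [('p1', 'p2'), ('p3', 'p4'), ('p4', 'p5')]
```

Chains of such edges reconstruct each team's iteration history, which is the kind of comparative analysis the dataset is meant to support.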

  • Research Article
  • 10.1080/10447318.2025.2580550
Measuring Trust in Generative AI Chatbots: Russian Adaptation and Validation of the Human-AI Trust Measurement Instrument
  • Nov 7, 2025
  • International Journal of Human–Computer Interaction
  • Antonina Rafikova + 1 more

As generative AI (GenAI) systems become increasingly prevalent, understanding user trust remains crucial for their successful adoption. This study had two main objectives: (1) to assess the conceptual validity of an established Human–AI Trust scale in the context of GenAI chatbots, and (2) to adapt the instrument for Russian-speaking users. The adapted instrument was shown to effectively capture multiple dimensions of trust in GenAI—understandability, technical competence, reliability, helpfulness, personal attachment, user autonomy, faith, and institutional credibility—with excellent reliability and validity. Strong correlations with technology acceptance measures and weak associations with personality traits and generalized trust supported convergent and divergent validity, respectively. The successful replication of the original factor structure in a GenAI context underscores the theoretical continuity of multidimensional trust frameworks across different AI classes. This research contributes to the growing field of human–AI interaction by providing a validated tool for assessing trust in GenAI systems.

  • Research Article
  • 10.1111/bjet.70031
Ink and algorithm: Exploring temporal dynamics in generative AI-assisted writing
  • Nov 26, 2025
  • British Journal of Educational Technology
  • Kaixun Yang + 6 more

The advent of Generative AI (GAI) has transformed writing, marking a shift towards GAI-assisted writing in education. However, the dynamics of human–AI interaction in the writing process are not well understood, and thus it remains largely unknown how human learning can be effectively supported with such technologies. This study addresses this gap by investigating how humans employ GAI during writing and examining the interplay between patterns of GAI usage and writing behaviours. To capture these patterns, we applied Dynamic Time Warping time-series clustering to identify temporal trajectories of GAI use and employed Epistemic Network Analysis to examine how these trajectories relate to cognitive processes such as knowledge telling, knowledge transformation and cognitive presence. Our analysis revealed four distinct temporal patterns of GAI usage (i.e., AI-critical writers, AI-dependent writers, AI-independent writers and AI-balanced writers), each associated with different cognitive engagement strategies. The findings suggest that some writers tend to rely excessively on GAI, which may limit opportunities for meaningful learning by reinforcing surface-level strategies such as knowledge telling. These insights highlight the need for researchers and educators to design GAI-based writing assistants that are sensitive to students' temporal dynamics and can scaffold more productive cognitive engagement.
Practitioner notes
What is already known about this topic
  • Researchers and educators can gain practical insights into achieving intelligence augmentation through critical engagement by studying effective user behaviours for enhanced human–AI partnership in writing.
  • Generative AI-assisted writing can be evaluated using an evidence-centred assessment framework, which relates to writing cognitive processes.
  • Current studies on Generative AI-assisted writing usually classify writers according to their overall AI usage behaviours throughout the entire writing session.
What this paper adds
  • We propose using time-series clustering to identify and analyse common temporal patterns in AI usage during generative AI-assisted writing processes.
  • We uncover the correlation between temporal patterns in AI usage and human writing behaviours, which reflect cognitive processes, through Epistemic Network Analysis.
  • We identify four major distinct temporal patterns in AI utilization and highlight that each pattern is correlated with different cognitive processes.
Implications for practice and/or policy
  • Researchers and educators should be aware of the risks associated with students overly relying on GAI for writing tasks, as this dependency can hinder opportunities for meaningful learning.
  • Researchers and educators should carefully consider the types of suggestions provided by GAI-based writing tools when integrating them into educational contexts.
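Dynamic Time Warping, the distance underlying the clustering method named above, can be sketched in a few lines. A minimal illustration with made-up per-interval usage series, not the paper's implementation:

```python
def dtw(a, b):
    """Dynamic Time Warping distance between two numeric sequences."""
    n, m = len(a), len(b)
    inf = float("inf")
    # D[i][j] = cost of the best alignment of a[:i] with b[:j]
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch a
                                 D[i][j - 1],      # stretch b
                                 D[i - 1][j - 1])  # step both
    return D[n][m]

# Two hypothetical AI-usage trajectories with the same shape but
# different pacing: DTW warps the time axis and finds distance 0,
# which is why it suits clustering trajectories of unequal tempo.
early_heavy = [3, 3, 1, 0]
stretched = [3, 3, 3, 1, 0]
```

Clustering would then group writers by pairwise DTW distance between their usage trajectories, yielding pattern groups like the four reported in the paper.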

  • Research Article
  • 10.1136/bmjhci-2025-101640
Bridging generative AI and healthcare practice: insights from the GenAI Health Hackathon at Hospital Clínic de Barcelona.
  • Oct 15, 2025
  • BMJ health & care informatics
  • Santiago Frid + 17 more

To describe the implementation of a multidisciplinary, ethically grounded hackathon as a model to develop and evaluate generative AI (GenAI) solutions for real-world clinical challenges within a hospital setting. The GenAI Health Hackathon (GAHH) organised at Hospital Clínic de Barcelona included 13 challenges selected via an internal call based on clinical impact, feasibility and data availability. Participants accessed anonymised real-world data through a secure cloud environment. Teams employed large language models and retrieval-augmented generation to build prototypes addressing tasks such as clinical text structuring, decision support and workflow automation. Human-in-the-loop validation, explainability and regulatory safeguards were emphasised. The hackathon yielded multiple AI prototypes tested on real data. Results varied: entity recognition reached 90.5% accuracy, summarisation achieved >90% clinician concordance, and nutritional models achieved F1 scores of 0.75-0.93. Lower scores (F1<0.52, Jaccard Index <0.4) were seen in complex reasoning or multilingual tasks. Bias was explored in 10 projects, with mitigations such as stratified sampling, prompt tuning, disclaimers and expert oversight. A transferable framework was proposed to replicate responsible GenAI hackathons in clinical contexts. Interdisciplinary collaboration and real-world testing proved essential for aligning GenAI with clinical needs. The hackathon revealed challenges in bias, evaluation and integration but offered a transferable framework for responsible innovation under the General Data Protection Regulation and the European Union Artificial Intelligence Act. The GAHH demonstrated that GenAI can be safely and effectively applied in healthcare with rigorous governance and interdisciplinary collaboration, offering a scalable model for responsible AI innovation.
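The F1 scores quoted above are the harmonic mean of precision and recall. A minimal sketch of how such a score falls out of raw prediction counts (illustrative numbers, not the hackathon's actual evaluation code):

```python
def f1_score(tp, fp, fn):
    """F1 from true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical entity-recognition tally: 3 correct extractions,
# 1 spurious, 1 missed -> precision 0.75, recall 0.75, F1 0.75.
score = f1_score(tp=3, fp=1, fn=1)
```

F1 penalizes an imbalance between precision and recall, which is why it is a common headline metric for clinical extraction tasks where both missed and spurious entities are costly.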

  • Research Article
  • Cite Count Icon 1
  • 10.3390/bdcc9030061
Transitioning from TinyML to Edge GenAI: A Review
  • Mar 6, 2025
  • Big Data and Cognitive Computing
  • Gloria Giorgetti + 1 more

Generative AI (GenAI) models are designed to produce realistic and natural data, such as images, audio, or written text. Due to their high computational and memory demands, these models traditionally run on powerful remote compute servers. However, there is growing interest in deploying GenAI models at the edge, on resource-constrained embedded devices. Since 2018, the TinyML community has shown that running fixed-topology AI models on edge devices offers several benefits, including independence from internet connectivity, low-latency processing, and enhanced privacy. Nevertheless, deploying resource-hungry GenAI models on embedded devices is challenging, since the latter have limited computational, memory, and energy resources. This review paper evaluates the progress made to date in the field of Edge GenAI, an emerging area of research within the broader domain of EdgeAI that focuses on bringing GenAI to edge devices. Papers released between 2022 and 2024 that address the design and deployment of GenAI models on embedded devices are identified and described, and their approaches and results are compared. This manuscript contributes to understanding the ongoing transition from TinyML to Edge GenAI and provides valuable insights to the AI research community on this emerging, impactful, and quite under-explored field.

  • Research Article
  • Cite Count Icon 1
  • 10.1111/fare.13188
Emerging Ideas. A brief commentary on human–AI attachment and possible impacts on family dynamics
  • Apr 21, 2025
  • Family Relations
  • Brandon T Mcdaniel + 3 more

Objective: In this brief commentary article, we outline an emerging idea that, as conversational artificial intelligence (CAI) becomes a part of an individual's environment and interacts with them, their attachment system may become activated, potentially leading to behaviors, such as seeking out the CAI to feel safe in times of stress, that have typically been reserved for human-to-human attachment relationships. We term this attachment-like behavior, but future work must determine if these behaviors are driven by a human–AI attachment or something else entirely.
Background: CAI is an emerging technical advancement that is the cornerstone of many everyday tools (e.g., smartphone apps, online chatbots, smart speakers). With the advancement of generative and conversational AI, device affordances and technical systems are increasingly complex. For example, generative AI has allowed for more personalization, human-like dialogue and interaction, and the interpretation and generation of human emotions. Indeed, AI tools increasingly have the ability to mimic human caring, learning from past interactions with the individual and appearing to be emotionally available and comforting in times of need. Humans instinctually have attachment-related needs for comfort and emotional security, and therefore, as individuals begin to feel their attachment-related needs are met by CAI, they may begin to seek out the CAI as a source of safety or to comfort their distress. This leads to questions of whether human–AI attachment is truly possible and, if so, what this attachment might mean for family dynamics.
