Human–AI collaboration for marketing capabilities: a meta-analysis


Similar Papers
  • Research Article
  • Citations: 1
  • 10.1057/s41599-025-05097-z
Examining human–AI collaboration in hybrid intelligence learning environments: insight from the Synergy Degree Model
  • Jun 14, 2025
  • Humanities and Social Sciences Communications
  • Xinmei Kong + 4 more

Integrating AI into teaching and learning has the potential to transform traditional classroom environments into hybrid intelligence learning environments, in which human teachers and AI teachers (educational robots) work together synergistically to enhance students’ learning processes and outcomes. To understand and optimize the synergistic effect of human–AI collaboration in hybrid intelligence learning environments, this study proposes a human–AI synergy degree model (HAI-SDM). A case study was conducted to examine the synergy degree and order degree in human–AI collaboration, involving forty students and one teacher from a class in a junior high school. The results indicate that the order degree between the human teacher and AI machines remains at a moderate level while undergoing dynamic changes. The synergy degree fluctuates between low and moderate, reflecting relatively orderly development among the three subsystems (collaboration subject subsystem, collaboration process subsystem and collaboration environment subsystem), although one subsystem may exhibit disordered behaviours in contrast to the others. These findings have implications for developing more effective human–AI classroom collaboration and promoting the effective integration of AI into teaching and learning.

  • Research Article
  • 10.37547/tajet/volume07issue03-05
Human-AI Collaboration in IT Systems Design: A Comprehensive Framework for Intelligent Co-Creation
  • Mar 5, 2025
  • The American Journal of Engineering and Technology
  • Md Mahbub Rabbani + 5 more

In recent years, human–AI collaboration has emerged as a promising approach to IT systems design, aiming to balance automation and human expertise. Specifically, this paper investigates a broad framework for intelligent co-creation in IT systems, in which humans and AI dynamically share IT tasks, AI provides decision tools for augmentation, and mutual performance is optimized by dynamically adjusting learning parameters. The research employs a mixed-methods design, using case studies together with surveys and quantitative data analysis to assess existing collaboration models. We find that hybrid teams, consisting of both AI agents and human experts, increase productivity by up to 40% when executing iterative design processes. In addition, the study provides important insights into critical success factors such as adaptive system interfaces, trust-building mechanisms and skill augmentation strategies. It also presents a path for overcoming ubiquitous challenges in deploying collaborative frameworks, such as technological misalignment and user resistance. The proposed framework is intended to enable replication of such integration in real-time IT environments, offering flexibility, scalability and long-term efficiency. Finally, this research adds to the expanding body of knowledge on human-centered AI development and offers IT leaders practical approaches to harness human–AI synergy for innovation and competitiveness.

  • Research Article
  • 10.47989/ir30iconf47146
Collaborative human-AI risk annotation: co-annotating online incivility with CHAIRA
  • Mar 11, 2025
  • Information Research an international electronic journal
  • Jinkyung Katie Park + 3 more

Introduction. Collaborative human-AI annotation is a promising approach for various tasks with large-scale and complex data. Tools and methods to support effective human-AI collaboration for data annotation are an important direction for research. In this paper, we present CHAIRA: a Collaborative Human-AI Risk Annotation tool that enables human and AI agents to collaboratively annotate online incivility. Method. We leveraged Large Language Models (LLMs) to facilitate the interaction between human and AI annotators and examine four different prompting strategies. The developed CHAIRA system combines multiple prompting approaches with human-AI collaboration for online incivility data annotation. Analysis. We evaluated CHAIRA on 457 user comments with ground truth labels based on the inter-rater agreement between human and AI coders. Results. We found that the most collaborative prompt supported a high level of agreement between a human agent and AI, comparable to that of two human coders. While the AI missed some implicit incivility that human coders easily identified, it also spotted politically nuanced incivility that human coders overlooked. Conclusions. Our study reveals the benefits and challenges of using AI agents for incivility annotation and provides design implications and best practices for human-AI collaboration in subjective data annotation.
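The inter-rater agreement evaluation described in this abstract can be illustrated with a chance-corrected agreement statistic (Cohen's kappa). A minimal sketch in Python; the labels below are invented for illustration and are not the CHAIRA data:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two coders."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each coder's marginal label frequencies
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[k] * freq_b[k] for k in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical incivility labels (1 = uncivil, 0 = civil)
human = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
ai    = [1, 0, 1, 1, 0, 0, 0, 0, 1, 0]
kappa = cohens_kappa(human, ai)  # ≈ 0.58 for these labels
```

Kappa near or above the threshold achieved by two human coders is the kind of evidence the paper uses to argue that the collaborative prompt works.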

  • Research Article
  • Citations: 38
  • 10.1108/imds-03-2022-0152
Beyond AI-powered context-aware services: the role of human–AI collaboration
  • Dec 9, 2022
  • Industrial Management & Data Systems
  • Na Jiang + 5 more

Purpose: Artificial intelligence (AI) has gained significant momentum in recent years. Among AI-infused systems, one prominent application is context-aware systems. Although the fusion of AI and context awareness has given birth to personalized and timely AI-powered context-aware systems, several challenges still remain. Given the “black box” nature of AI, the authors propose that human–AI collaboration is essential for AI-powered context-aware services to eliminate uncertainty and evolve. To this end, this study aims to advance a research agenda for facilitators and outcomes of human–AI collaboration in AI-powered context-aware services. Design/methodology/approach: Synthesizing the extant literature on AI and context awareness, the authors advance a theoretical framework that not only differentiates among the three phases of AI-powered context-aware services (i.e. context acquisition, context interpretation and context application) but also outlines plausible research directions for each stage. Findings: The authors delve into the role of human–AI collaboration and derive future research questions from two directions, namely, the effects of AI-powered context-aware service design on human–AI collaboration and the impact of human–AI collaboration. Originality/value: This study contributes to the extant literature by identifying knowledge gaps in human–AI collaboration for AI-powered context-aware services and putting forth research directions accordingly. In turn, the proposed framework yields actionable guidance for AI-powered context-aware service designers and practitioners.

  • Research Article
  • Citations: 25
  • 10.3390/systems11050217
Exploring Trust in Human–AI Collaboration in the Context of Multiplayer Online Games
  • Apr 24, 2023
  • Systems
  • Keke Hou + 2 more

Human–AI collaboration has attracted interest from both scholars and practitioners. However, the relationships in human–AI teamwork have not been fully investigated. This study aims to research the influencing factors of trust in AI teammates and the intention to cooperate with AI teammates. We conducted an empirical study by developing a research model of human–AI collaboration. The model presents the influencing mechanisms of interactive characteristics (i.e., perceived anthropomorphism, perceived rapport, and perceived enjoyment), environmental characteristics (i.e., peer influence and facilitating conditions), and personal characteristics (i.e., self-efficacy) on trust in teammates and cooperative intention. A total of 423 valid surveys were collected to test the research model and hypothesized relationships. The results show that perceived rapport, perceived enjoyment, peer influence, facilitating conditions, and self-efficacy positively affect trust in AI teammates. Moreover, self-efficacy and trust positively relate to the intention to cooperate with AI teammates. This study contributes to the teamwork and human–AI collaboration literature by investigating different antecedents of the trust relationship and cooperative intention.

  • Book Chapter
  • Citations: 3
  • 10.1007/978-3-031-46452-2_23
Multi-Stakeholder Perspective on Human-AI Collaboration in Industry 5.0
  • Sep 28, 2023
  • Thomas Hoch + 8 more

AI has gained significant traction in manufacturing, offering tremendous potential for enhancing production efficiency, cost reduction, and safety improvements. Consequently, developing AI-based software platforms that facilitate collaboration between human operators and AI services is crucial. However, integrating the different stakeholder perspectives into a common framework is a complex process that requires careful consideration. Our research has focused on identifying the individual relevance of varying quality characteristics per stakeholder toward such a software platform. Therefore, this work proposes an overview on the vital success factors related to human-AI teaming that can be used to measure fulfillment.

  • Research Article
  • Citations: 9
  • 10.1177/02761467241290813
Commentary on “AI is Changing the World: For Better or for Worse?”
  • Oct 17, 2024
  • Journal of Macromarketing
  • Praveen K Kopalle + 2 more

This commentary explores three fundamental premises surrounding the human-AI partnership. First, a human-AI collaboration is perhaps superior to either working independently, as AI enhances human capabilities but requires oversight to ensure ethical and accurate outcomes. Second, AI's effectiveness is limited by the quality and biases of its training data, which underscores the need for diverse, unbiased datasets. Without proper data, AI could perpetuate flawed or biased decisions, impacting areas such as hiring, healthcare, and empathy-driven interactions. Finally, generative AI is prone to “hallucinations,” where it produces plausible yet incorrect outputs. These errors pose significant risks in high-stakes sectors like healthcare and security. As AI becomes more ingrained in society, these challenges raise ethical concerns around job displacement, loss of human autonomy, and biased decision-making. Here, we also examine the implications of AI hallucinations and model collapse, stressing the importance of continuous human intervention to mitigate AI-driven inaccuracies. Ultimately, a balanced partnership between human judgment and AI's scalability, along with rigorous oversight, is necessary to unlock AI's potential while safeguarding societal values.

  • Research Article
  • 10.1108/itp-06-2024-0808
Enablers and inhibitors of AI assimilation in hiring: mitigating the effects of inhibitors through human–AI collaboration
  • Mar 20, 2025
  • Information Technology & People
  • Maryam Hina + 2 more

Purpose: Most prior studies have primarily investigated AI adoption, with less attention given to AI assimilation in human resource management (HRM). Additionally, prior studies often lack empirical verification of the extent to which human–AI collaboration might alleviate challenges and promote AI assimilation in the HRM context. Thus, this study aims to explore AI assimilation in recruitment with a balanced view that identifies both enabling and inhibiting factors while examining the role of human–AI collaboration in mitigating the effects of inhibiting factors. Design/methodology/approach: We used a mixed-method approach. Using an open-ended survey questionnaire and collecting data from 26 HR professionals, we identified five factors, namely AI competency, recruitment agility, AI opacity, AI empathy and human–AI collaboration, potentially impacting AI assimilation. Thereafter, drawing from the enabler–inhibitor perspective, we theorize that AI competency and recruitment agility are the enablers, whereas AI opacity and AI empathy are the inhibitors of an organization’s efforts to assimilate AI in recruitment practices. We tested our proposed model by collecting data from 309 HR professionals. Findings: The findings showed that both enablers, AI competency and recruitment agility, significantly influence AI assimilation; however, both inhibitors, AI opacity and AI empathy, are non-significant for AI assimilation. While looking into the reasons for these non-significant effects, we observed that the interaction terms between AI empathy and human–AI collaboration and between AI opacity and human–AI collaboration both had significant effects on AI assimilation. These interaction effects suggest that human–AI collaboration mitigates the constraining impact of both inhibitors. Originality/value: Drawing from the enabler–inhibitor perspective and by empirically testing our proposed model, this paper significantly contributes to the IS literature. Our study not only identifies factors that promote and inhibit AI assimilation in the context of HRM practices but also reveals how human–AI collaboration may mitigate the effects of inhibitors. Our findings suggest that organizations should foster a collaborative recruitment environment where AI handles repetitive tasks and humans focus on roles requiring emotional intelligence. This approach enhances the integration of AI-powered tools, addresses AI assimilation inhibitors and optimizes recruitment effectiveness.

  • Research Article
  • 10.1609/aaaiss.v5i1.35551
Human AI Collaboration for Trust Management
  • May 28, 2025
  • Proceedings of the AAAI Symposium Series
  • Mito Akiyoshi

Trust is one of the principles that human-AI teams must attain for the fulfillment of their mission. Explainable AI and the principle of computational reliabilism provide AI-intrinsic solutions for trust management. When human-AI collaboration breaks down, human-AI teams turn to common sense and intuition to recover trust. In addition, research on earlier innovations has shown that institutional and organizational mechanisms such as citizen advisory boards and standardization promote trust. This paper sketches a framework for deployable and actionable trust management mechanisms. To that end, it will: (1) identify three dimensions of trust; (2) examine the role of heterogeneous stakeholders in human-AI systems; (3) address the links among interpersonal trust, institutional trust, and trust in algorithms; and (4) suggest that stakeholder heterogeneity is a multi-level and multi-faceted imperative for establishing trust in human-AI teams.

  • Research Article
  • 10.1080/13504851.2025.2586160
Human–AI collaboration in high-stakes decisions: a meta-analysis of healthcare and public sectors
  • Nov 10, 2025
  • Applied Economics Letters
  • Vu Minh Ngo

This meta-analysis of 146 experiments in the healthcare and public sectors examines human–AI synergy versus augmentation amid substantial heterogeneity. We find that AI augmentation reliably improves human performance (Hedges’ g = 0.622), whereas synergy effects are generally negative, with AI alone often outperforming human–AI teams (Hedges’ g = −0.380), although publication bias favours positive augmentation results. Additionally, task type, AI transparency, and user expertise significantly moderate outcomes. These results caution against assuming inherent benefits of human–AI collaboration and instead support selective automation of structured tasks with human oversight for ethically complex decisions, guiding policymakers and leaders in optimizing human–AI integration.
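For readers unfamiliar with the effect-size metric reported above, Hedges' g is Cohen's d with a small-sample bias correction. A minimal sketch; the group statistics below are hypothetical and are not drawn from the meta-analysis:

```python
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference with Hedges' small-sample correction."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp         # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)  # correction factor J
    return j * d

# Hypothetical example: AI-augmented humans vs. humans alone on task accuracy
g = hedges_g(0.82, 0.10, 50, 0.75, 0.12, 50)  # ≈ 0.63
```

A positive g favours the first group, a negative g the second, which is how the paper can report positive augmentation effects alongside negative synergy effects.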

  • Research Article
  • Citations: 13
  • 10.1016/j.chb.2022.107606
Vero: An accessible method for studying human–AI teamwork
  • Dec 13, 2022
  • Computers in Human Behavior
  • Aaron Schecter + 9 more

Despite the recognized need to prepare for a future of human–AI collaboration, the technical skills necessary to develop and deploy AI systems are considerable, making such research difficult to perform without specialized knowledge. To make human–AI collaboration research more accessible, we developed a novel experimental method that combines a standard video conferencing platform, a set of animations, and Wizard of Oz methods to simulate a group interaction with an AI teammate. Through a case study, we demonstrate the flexibility and ease of deployment of this approach. We also provide evidence that the method creates a highly believable experience of interacting with an AI agent. By detailing this method, we hope that researchers regardless of background can replicate it to more easily answer questions that will inform the design and development of future human–AI collaboration technologies.

  • Research Article
  • Citations: 122
  • 10.1038/s41598-022-18751-2
Experimental evidence of effective human–AI collaboration in medical decision-making
  • Sep 2, 2022
  • Scientific reports
  • Carlo Reverberi + 31 more

Artificial Intelligence (AI) systems are precious support for decision-making, with many applications also in the medical domain. The interaction between MDs and AI enjoys a renewed interest following the increased possibilities of deep learning devices. However, we still have limited evidence-based knowledge of the context, design, and psychological mechanisms that craft an optimal human–AI collaboration. In this multicentric study, 21 endoscopists reviewed 504 videos of lesions prospectively acquired from real colonoscopies. They were asked to provide an optical diagnosis with and without the assistance of an AI support system. Endoscopists were influenced by AI (OR = 3.05), but not erratically: they followed the AI advice more when it was correct (OR = 3.48) than incorrect (OR = 1.85). Endoscopists achieved this outcome through a weighted integration of their and the AI opinions, considering the case-by-case estimations of the two reliabilities. This Bayesian-like rational behavior allowed the human–AI hybrid team to outperform both agents taken alone. We discuss the features of the human–AI interaction that determined this favorable outcome.
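One way to picture the "weighted integration of opinions" described above is a reliability-weighted combination of the two probability estimates in log-odds space. This is an illustrative, Bayesian-flavoured sketch, not the paper's fitted model; the weights and probabilities are invented:

```python
import math

def combine_opinions(p_human, p_ai, w_human, w_ai):
    """Combine two probability estimates, each weighted by its assumed reliability."""
    logit = lambda p: math.log(p / (1 - p))
    combined = w_human * logit(p_human) + w_ai * logit(p_ai)
    return 1 / (1 + math.exp(-combined))  # back to probability scale

# Hypothetical case: the endoscopist leans benign, the AI leans neoplastic
p = combine_opinions(p_human=0.30, p_ai=0.80, w_human=0.5, w_ai=0.5)  # ≈ 0.57
```

Giving the more reliable agent a larger weight pulls the team estimate toward its opinion, which is the qualitative behavior the study reports.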

  • Research Article
  • Citations: 175
  • 10.1016/j.jbusres.2020.11.038
Cobots in knowledge work: Human – AI collaboration in managerial professions
  • Dec 17, 2020
  • Journal of Business Research
  • Konrad Sowa + 2 more

  • Research Article
  • Citations: 14
  • 10.1016/j.chbah.2023.100015
Optimizing human-AI collaboration: Effects of motivation and accuracy information in AI-supported decision-making
  • Aug 1, 2023
  • Computers in Human Behavior: Artificial Humans
  • Simon Eisbach + 2 more

  • Research Article
  • 10.3126/nprcjmr.v2i7.80610
Enhancing Digital Transformation and Green HRM through Human-AI Collaboration: A Supply Chain-Inspired Framework for Institutional Quality Support in Community Colleges of Bagmati Province, Nepal
  • Jul 14, 2025
  • NPRC Journal of Multidisciplinary Research
  • Tara Prasad Gautam + 2 more

This study explores how human–AI collaboration moderates the relationship between digital transformation and Green Human Resource Management (Green HRM) within Nepal’s community colleges. Bridging digital and green HRM frameworks, it introduces a supply chain–inspired lens to examine service delivery and sustainability in public higher education. Guided by Resource-Based View (RBV) and Socio-Technical Systems Theory, the research applies a mixed-methods design combining Structural Equation Modeling (SEM) with fuzzy-set Qualitative Comparative Analysis (fsQCA) on data from 285 staff members across five Tribhuvan University–affiliated community colleges in Bagmati Province. SEM results confirm that digital transformation significantly enhances Green HRM (β = 0.48, p < 0.001), and this relationship is strengthened by effective human–AI collaboration (interaction β = 0.25, p < 0.001). fsQCA identifies two equifinal pathways to high Green HRM: (1) Tech + AI Synergy (high digitalization and high human–AI collaboration), and (2) Tech-Driven Path (high digitalization alone). These findings reveal that while AI augmentation enhances green outcomes, foundational digital infrastructure alone can also yield substantial sustainability gains. Theoretically, this is one of the first empirical studies to integrate digital transformation, Green HRM, and human–AI collaboration within a developing country education system, extending supply chain models from corporate to academic contexts. Practically, it provides actionable insights for Internal Quality Assurance Cells (IQACs) to align digital and green agendas and suggests that policy bodies like UGC and MoEST should embed AI-readiness and sustainability metrics into accreditation frameworks. The study underscores the importance of socio-technical alignment in enabling sustainable, tech-enabled institutional quality in Nepal’s higher education landscape.
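The moderation (interaction) effect reported above can be illustrated with a simple regression that includes a product term. The data here are simulated, with coefficients chosen only to echo the reported magnitudes; they bear no relation to the study's sample or SEM:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 285  # same size as the study's sample; the data themselves are simulated
digital = rng.normal(size=n)   # digital transformation score
collab = rng.normal(size=n)    # human–AI collaboration score
# Outcome with a main effect (0.48) and a positive interaction (0.25) plus noise
green_hrm = 0.48 * digital + 0.25 * digital * collab + rng.normal(scale=0.5, size=n)

# OLS with an interaction term: columns are intercept, X, M, X*M
X = np.column_stack([np.ones(n), digital, collab, digital * collab])
beta, *_ = np.linalg.lstsq(X, green_hrm, rcond=None)
# beta[1] recovers the main effect, beta[3] the moderation coefficient
```

A significantly positive `beta[3]` is what "the relationship is strengthened by human–AI collaboration" means in regression terms.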
