More Warmth and Less Competence? Navigating the Positive Outcomes of Kindchenschema Cuteness in AI Agents’ Service Failure
With AI agents increasingly deployed, their failures demand strategies to sustain user forgiveness. While post-failure remedies are well studied, little is known about how kindchenschema cuteness facilitates user forgiveness, which preserves an opportunity for system improvement and user retention. Grounded in evolutionary psychology, this study examines how kindchenschema cuteness affects forgiveness toward failing AI agents. Using a multi-method approach (behavioral experiments, eye-tracking, ECG), we reveal that: (1) kindchenschema cuteness triggers dual forgiveness pathways, enhancing emotional empathy via perceived warmth while boosting cognitive tolerance via perceived competence; (2) novice personality framing strengthens this effect, particularly for high-severity failures; and (3) physiological evidence confirms users' attentional bias toward kindchenschema features (prolonged fixation) and increased emotional arousal (larger ECG changes). These findings bridge evolutionary psychology and human-AI interaction by validating biologically rooted kindchenschema cuteness response mechanisms. For practitioners, we offer insights for designing failure-resistant AI agents through strategic anthropomorphism and personality framing.
- Conference Article
11
- 10.1109/ichms56717.2022.9980625
- Nov 17, 2022
Search and Rescue Operations (SRO) are notoriously difficult, as they typically involve human operations in high-risk, low-visibility environments. Often, stakeholders have only a general perception of the possible adversities in the situational environment. Ultimately, the success of these operations is a function of the manpower available, the terrain of the region, informed decision-making based on terrain mapping, and completion of search and rescue tasks with fewer human casualties. A practical solution to this problem is to leverage autonomous systems such as drones and rescue robots that scout the terrain to gather information, augmenting the rescue team’s capabilities and mission success rates. In some situations, such as a combat search and rescue mission, a cooperative effort between a human and an AI agent may be required, whereby both share intelligence and coordinate in decision-making tasks. In this work, we present several novel contributions through a combat search and rescue simulation scenario that leverages a drone-based AI autonomous system to detect targets of interest in the environment, as the basis for a human-AI teaming study. We examine various human-factors metrics for different modes of interaction between the human agent and the AI-driven drone/autonomous system agent, including human situational awareness and time to mission completion with and without drone-based AI target detection. In addition, we introduce innovative AI techniques to model human agent (player) - AI agent (drone) exchanges through a hostage rescue scenario-based simulation and explore incentive strategies directed toward the human agent to encourage adoption of the AI-based autonomous system as a cooperative intelligence asset and to improve human-AI teaming performance.
The unification of AI techniques for modeling human-AI interaction with incentive mechanisms that encourage usage of autonomous systems sets the foundation for assessing the efficacy of AI in human-agent teams.
- Conference Article
30
- 10.1145/3411764.3445256
- May 6, 2021
In Human-AI collaborative settings that are inherently interactive, the direction of communication plays a role in how users perceive their AI partners. In an AI-driven cooperative game with partially observable information, players (whether AI or human) require their actions to be interpreted accurately by the other player to yield a successful outcome. In this paper, we investigate social perceptions of AI agents with various directions of communication in a cooperative game setting. We measure participants’ subjective social perceptions (rapport, intelligence, and likeability) of their partners as a function of whether they believe they are playing with an AI or with a human, and of the nature of the communication (responsiveness and leading roles). We ran a large-scale study of this collaborative game on Mechanical Turk (n=199) and find significant differences in gameplay outcome and social perception across different AI agents, different directions of communication, and whether the agent is perceived to be an AI or a human. We find that the bias against AI demonstrated in prior studies varies with the direction of communication and with the AI agent.
- Research Article
1
- 10.2478/eoik-2025-0063
- Sep 1, 2025
- ECONOMICS
Purpose: This study examines the transformation of financial decision-making through the adoption of artificial intelligence, focusing on the shift from conventional AI systems to AI agents and agentic AI. It differentiates between automated analytical tools and autonomous, goal-oriented systems that increasingly assume decision-making authority within financial operations. Design/Methodology/Approach: Employing a qualitative multi-method approach—comprising semi-structured expert interviews, industry report synthesis, in-depth case studies, and a comparative performance evaluation—this research investigates AI agent implementation across SMEs, pharmaceutical analytics, and ERP-integrated corporate finance. Theoretically, it extends foundational models including the Efficient Market Hypothesis (EMH), Behavioral Finance, and the Adaptive Markets Hypothesis (AMH) by embedding the dynamic, learning-driven nature of AI agents into financial decision logic. Findings: The results indicate that AI agents introduce novel forms of informational asymmetry, enhance bias mitigation through adaptive modeling, and give rise to emergent decision structures via multi-agent interactions. These dynamics challenge core assumptions of market rationality and static efficiency. Practically, the study offers a structured framework for AI agent integration, emphasizing explainability, hybrid human-AI governance, and risk-specific safeguards to navigate ethical and regulatory constraints. The proposed conceptual taxonomy and cross-industry implementation roadmap reposition agentic AI as a strategic transformation—reshaping how financial institutions process data, execute judgments, and regulate algorithmic autonomy.
- Conference Article
11
- 10.18653/v1/2020.acl-demos.25
- Jan 1, 2020
We summarize our past five years of work on designing, building, and studying Sugilite, an interactive task learning agent that can learn new tasks and relevant associated concepts interactively from the user’s natural language instructions and demonstrations leveraging the graphical user interfaces (GUIs) of third-party mobile apps. Through its multi-modal and mixed-initiative approaches for Human-AI interaction, Sugilite made important contributions in improving the usability, applicability, generalizability, flexibility, robustness, and shareability of interactive task learning agents. Sugilite also represents a new human-AI interaction paradigm for interactive task learning, where it uses existing app GUIs as a medium for users to communicate their intents with an AI agent instead of the interfaces for users to interact with the underlying computing services. In this chapter, we describe the Sugilite system, explain the design and implementation of its key features, and show a prototype in the form of a conversational assistant on Android.
- Conference Article
- 10.24963/kr.2024/73
- Nov 1, 2024
We present a novel framework designed to extend model reconciliation approaches, commonly used in human-aware planning, for enhanced human-AI interaction. By adopting a structured argumentation-based dialogue paradigm, our framework enables dialectical reconciliation to address knowledge discrepancies between an explainer (AI agent) and an explainee (human user), where the goal is for the explainee to understand the explainer's decision. We formally describe the operational semantics of our proposed framework, providing theoretical guarantees. We then evaluate the framework's efficacy “in the wild” via computational and human-subject experiments. Our findings suggest that our framework offers a promising direction for fostering effective human-AI interactions in domains where explainability is important.
- Research Article
1
- 10.1177/10711813251358779
- Jul 15, 2025
- Proceedings of the Human Factors and Ergonomics Society Annual Meeting
Trust research in human-AI interaction over the past decades has identified various factors influencing trust dynamics within dyadic relationships between a single human and an AI agent. The current study addresses the gap of limited exploration in non-dyadic HAI scenarios by examining trust dynamics across two referents: AI and other humans. Using a custom-developed simulated mass evacuation testbed, we focus on a multi-operator-single-AI (MOSA) scenario, where multiple individuals need to evacuate to a safe area with the assistance of an AI guide. Participants can also report roadblocks to help others at a personal cost. We investigate trust dynamics in both the AI and other humans, specifically examining how trust changes after passing each waypoint. Our goal is to understand the effects of information transparency and individual compliance and reporting behaviors (at time t) on trust dynamics (trust_{t+1} − trust_t). The study highlights that trust dynamics vary significantly depending on the referent.
- Research Article
- 10.31108/2.2025.2.35.15
- Jun 8, 2025
- Організаційна психологія Економічна психологія
Introduction. The article addresses the development of metacommunicative competence among international management professionals amid rapid AI integration. Based on an analysis of contemporary human-AI interaction research, it establishes theoretical foundations of metacommunication in cognitively heterogeneous environments and substantiates the eco-facilitative approach as a methodological basis for developing this competence in hybrid management contexts. Aim. The aim is to conceptualize metacommunication as a key mechanism for interacting with AI agents within interface-based reality, and to justify the application of the ECPF approach for cultivating management competencies in international contexts and enabling interactions with agents of different natures. Methods. Interdisciplinary theoretical analysis (psychology, management, post-humanist philosophy, communication studies) combined with logical-semantic modeling, comparative analysis of classical and contemporary approaches to metacommunication, interpretive reconstruction to adapt eco-facilitative principles, and conceptual modeling of managerial metacommunicative competence integrating scholarly source analysis, practice synthesis, AI interaction interpretation, and insights from educational experiments. Results. Human-AI interaction reveals the need for inter-agent sensitivity as the capacity to coordinate hybrid dialogue. A metacommunicative competence model encompassing cognitive, communicative, and ethical components was developed; metacommunication was established as a meaning-making mechanism for responsible multi-agent management; and tools for ECPF sessions and AI partnership simulations were proposed. Conclusions. Metacommunication as inter-agent sensitivity enables adaptive AI interaction through meaning generation. The ECPF approach provides an ecologically balanced framework for facilitative leadership in hybrid organizations. The development creates a methodological foundation for educational programs and next-generation management tools in human-AI environments.
- Research Article
- 10.1080/10447318.2026.2618547
- Jan 29, 2026
- International Journal of Human–Computer Interaction
Older consumers in China increasingly engage in online shopping but often struggle with complex purchase processes due to age-related cognitive decline and limited digital literacy. Yet little is known about how AI recommendation strategy (selection vs. rejection) and information presentation modes jointly shape older consumers’ decision-making in AI-based decision environments. Drawing on three experimental studies, the results consistently show that AI agents are more effective in enhancing seniors’ decision certainty and behavioral intentions when adopting rejection rather than selection strategies, particularly under structured (vs. unstructured) information presentations. Both static and dynamic presentation modes interact with recommendation strategies to shape evaluation processes, reduce cognitive load, and amplify the impact of AI-based recommendations on seniors’ decision outcomes. The findings contribute to research on human-AI interaction and offer practical guidance for designing age-friendly AI recommendation systems in digital marketplaces.
- Book Chapter
1
- 10.1007/978-3-030-82681-9_15
- Jan 1, 2021
We summarize our past five years of work on designing, building, and studying Sugilite, an interactive task learning agent that can learn new tasks and relevant associated concepts interactively from the user’s natural language instructions and demonstrations leveraging the graphical user interfaces (GUIs) of third-party mobile apps. Through its multi-modal and mixed-initiative approaches for Human-AI interaction, Sugilite made important contributions in improving the usability, applicability, generalizability, flexibility, robustness, and shareability of interactive task learning agents. Sugilite also represents a new human-AI interaction paradigm for interactive task learning, where it uses existing app GUIs as a medium for users to communicate their intents with an AI agent instead of the interfaces for users to interact with the underlying computing services. In this chapter, we describe the Sugilite system, explain the design and implementation of its key features, and show a prototype in the form of a conversational assistant on Android.
- Research Article
- 10.38124/ijsrmt.v3i4.572
- Apr 28, 2024
- International Journal of Scientific Research and Modern Technology
AI agents and generative AI systems are increasingly becoming integral across sectors such as healthcare, finance, and the creative industries. However, the rapid evolution of these systems has outpaced traditional evaluation methods, leaving gaps in how they are assessed. This paper proposes a comprehensive Key Performance Indicator (KPI) framework spanning five vital dimensions – Model Quality, System Performance, Business Impact, Human-AI Interaction, and Ethical and Environmental Considerations – to holistically evaluate these systems. Drawing insights from multiple studies, benchmarks like MLPerf and the AI Index, and standards like the EU AI Act [1] and the NIST AI RMF, this framework blends established metrics like accuracy, latency, and efficiency with novel metrics like “ethical drift” and “creative diversity” for tracking AI’s moral compass in real time. Evaluated on systems like GPT-4, DALL-E 3, and MidJourney, and validated through case studies such as Waymo [1] and Claude 3, this framework addresses technical, operational, and ethical dimensions to enhance accountability and performance.
- Research Article
24
- 10.1016/j.chb.2020.106378
- Apr 13, 2020
- Computers in Human Behavior
Investigating the attentional bias and information processing mechanism of mobile phone addicts towards emotional information
- Research Article
1
- 10.1080/10447318.2024.2400398
- Sep 13, 2024
- International Journal of Human–Computer Interaction
With the proliferation of AI conversational agents, the design of preset prompts—text suggestions that guide user interactions—has become crucial for enhancing user experience. This study investigates the impact of different preset prompt language styles (social-oriented vs. task-oriented) on user satisfaction. Utilizing two empirical studies, we examined how these language styles influence user perceptions of an AI agent’s warmth and competence, and how these perceptions mediate overall satisfaction. In the first study, participants interacted with an AI agent using either social-oriented or task-oriented prompts under conditions of service success or failure. The results indicated that social-oriented prompts significantly enhance user satisfaction by increasing perceptions of warmth, but not competence. However, this positive effect diminishes in the event of service failure. In the second study, we explored the moderating effect of task urgency. Findings revealed that the positive impact of social-oriented prompts on satisfaction is significant in low-urgency tasks but not in high-urgency scenarios. These insights underscore the importance of prompt language style in AI interactions and provide practical implications for designing more effective AI communication strategies, especially in customer service contexts.
- Research Article
3
- 10.1177/10963480241296065
- Nov 11, 2024
- Journal of Hospitality & Tourism Research
AI agents, such as service robots, could encompass gender cues. However, little is known regarding whether and how customers apply gender stereotyping to service failures in gendered service tasks performed by robots. Drawing on gender stereotype theory, we investigate the joint effects of robot gender (feminine vs. masculine) and task type (female-dominated vs. male-dominated) on customer dissatisfaction following service failures. Study 1 reveals that feminine service robots are perceived as more communal but as equally agentic as their masculine counterparts. Study 2 demonstrates that feminine (vs. masculine) service robots generate lower customer dissatisfaction when failing a female-dominated task. However, this discrepancy diminishes when failures occur in a male-dominated task. Perceived communion and tolerance serially mediate such robot gender effects. Our findings suggest that using feminine robots across all service categories may be a cost-effective strategy for hospitality organizations, eliminating the need to vary robot gender by task type.
- Research Article
- 10.12783/dtssehs/icesd2019/28211
- Feb 27, 2019
- DEStech Transactions on Social Science, Education and Human Science
Abstinent drug users were hypothesized to harbor attentional bias towards stimuli relevant to negative facial expressions. This study investigated the attentional bias hypothesis for abstinent methamphetamine users, as well as the effects of attentional bias modification on the attentional bias for facial expressions and its effects on relapse tendency in abstinent drug users. These possibilities were investigated by using the dot-probe paradigm and the “find-the-smile” visual search paradigm in two different behavioral experiments. The results of Experiment 1 showed that abstinent methamphetamine users displayed significant attentional bias for the facial expression of sadness. The results of Experiment 2 showed that the visual search attentional bias training significantly increased the attentional bias for happy faces and decreased the attentional bias for sad faces in abstinent methamphetamine users. The study also found that such a training program decreased relapse tendency. These results indicate that visual search attentional bias modification may be an effective behavioral intervention for methamphetamine users.
- Research Article
43
- 10.1108/ejm-12-2016-0887
- Jan 29, 2018
- European Journal of Marketing
Purpose: Online consumer reviews (OCRs) have emerged as a particularly important type of user-generated information about a brand because of their widespread adoption and influence on consumer decision-making. Much of the existing OCR research focuses on quantifiable OCR features such as star ratings and volume; more research is needed on the influence of review elements beyond numeric ratings, such as the verbatim text, particularly in service contexts. The purpose of this research is to investigate the impact of service failures on consumer arousal and emotions. Design/methodology/approach: The authors present three behavioral experiments that manipulate service failure and linguistic elements of OCRs, using galvanic skin response, survey measures, and automated facial expression analysis. Findings: Negative OCRs lead to the greatest levels of arousal when consumers read OCRs. Service failure severity impacts anger, and referential cohesion, an observable property of text that helps a reader better understand ideas in the text, negatively moderates the relationship between service failure severity and anger. Originality/value: The authors are among the first to empirically test the effect of emotional contagion in a user-generated content context, demonstrating that it can occur when consumers read such content even if they did not experience the events described. The research uses self-report and physiological measures to assess consumer perceptions, arousal, and emotions related to service failures, increasing the robustness of the literature. These findings contribute to the marketing literature on OCRs in service failures, physiological measures of consumers’ emotions, the negativity bias, and emotional contagion in user-generated content contexts.