- Research Article
- 10.1145/3779302
- Jan 19, 2026
- ACM Transactions on Human-Robot Interaction
- Rita Molle + 2 more
Rehabilitative therapies play a crucial role in upper limb motor recovery, as the upper limbs are the body parts most engaged in activities of daily living. Given the large number of people with motor disorders and the shortage of therapists, integrating data-driven AI methodologies and robots into rehabilitation can help create personalized and appropriately challenging therapies, benefiting both patients and therapists. AI methods can be implemented in different functional modules of the robotic platform, such as user intention recognition, robot motion planning, robot interaction control, and system adaptation through different learning paradigms. This article presents a systematic literature review on the use of data-driven learning methods in upper limb robot-aided rehabilitation. The analysis is structured around the learning paradigms adopted, namely supervised, unsupervised, and reinforcement learning, as well as the corresponding task types (e.g., classification, regression, and control tasks) and model types, distinguishing between machine learning and deep learning approaches. The review reveals that most studies employ supervised learning to address classification tasks, and that deep learning models are the most frequently adopted.
- Research Article
- 10.1145/3779295
- Jan 19, 2026
- ACM Transactions on Human-Robot Interaction
- Sara Nielsen + 4 more
Collaborative robots (cobots) and AI technologies are increasingly adopted in industrial settings to enhance productivity and efficiency. While cobots equipped with AI capabilities can enable more collaborative work between people and machines, they also face worker acceptance challenges. Understanding future workers' perceptions, attitudes, and experiences with cobots and conversational AIs can inform robot designers and developers to design systems that promote collaboration, trust, and acceptance. In this study, we gathered quantitative and qualitative data from 37 participants enrolled in a vocational training program for industrial factory workers, who interacted with an AI-empowered, voice-enabled cobot during a simulated smart-factory assembly task and visited an art exhibition featuring industrial robots and cobots. While these participants are not currently employed in factories, they are considered proxy users: individuals with relevant domain knowledge and training who represent future factory workers. The art exhibition functioned as a design probe to elicit discussion and prompt critical reflection about automation and the role of artificial emotions in HRI. The smart-factory task offered participants a concrete example of how AI-empowered virtual assistants might be combined with cobots on the factory floor. In contrast with some of the HRI literature, participants expressed a strong preference for robots without emotional displays and social behaviors, challenging the view that anthropomorphism and human-like emotions promote robot acceptance. Based on our study, we propose design recommendations for developing AI-empowered, voice-enabled cobots based on five themes generated from the qualitative data.
- Research Article
- 10.1145/3777455
- Jan 19, 2026
- ACM Transactions on Human-Robot Interaction
- Thomas H Weisswange + 4 more
Many human activities are performed in groups—making decisions in workplace meetings, cooperating on a sports team, or meeting with friends for dinner. All these activities involve complex conditions and interaction processes that influence their outcomes in terms of performance, personal goals, and group objectives. As robots are increasingly being positioned within groups, improving these outcomes has emerged as an important application area in social robotics, particularly through robotic facilitation. Robot facilitators aim to elicit positive changes by deliberately influencing group processes. While research in this field has demonstrated that robots can effectively influence interpersonal dynamics, there remains a notable gap in consolidating these insights into a coherent understanding that can guide the design and development of better facilitators. We present a scoping review of literature targeting changes in interactions between multiple humans that are driven by intentional actions from robotic agents. To identify key considerations for the design of robot facilitators, we take inspiration from human group research theories to organize existing approaches. Our review includes 108 publications that meet our inclusion criteria, yielding 85 distinct application targets for group facilitation using robots. Based on the identified instances, we extract categories of possible application targets and a set of design concepts that can guide future work on robotic group facilitators.
- Research Article
- 10.1145/3776540
- Jan 19, 2026
- ACM Transactions on Human-Robot Interaction
- Sahand Shaghaghi + 4 more
A better understanding of how humans perceive robot personality variables could enable the design of more socially acceptable robots. In this exploratory study, we examined whether manipulations of an iCub robot's voice and movements affected human participants' perceptions of the robot's personality. We programmed the robot to behave in different ways during a teaching scenario in which it played either a teaching, learning, or collaborative role, shown in recorded videos of human-robot interactions. A total of 240 participants in an Amazon Mechanical Turk study watched these videos and completed a series of questionnaires assessing their perceptions of the robot. Participants perceived the iCub as more extroverted when it spoke faster, with a higher pitch, and performed larger-amplitude movements. Participants' own personality dimensions influenced their ratings of the robot on the TIPI and RoSAS personality dimensions more than the robot's social role and personality manipulations did. Participants' self-rated extroversion, emotional stability, and conscientiousness repeatedly appeared as significant factors affecting their perceptions of the robot's personality. Interestingly, we observed strong perceiver effects, whereby participants' perceptions of the robot's personality traits were correlated with their own self-rated personality traits.
- Research Article
- 10.1145/3777552
- Jan 19, 2026
- ACM Transactions on Human-Robot Interaction
- Cynthia Matuszek + 7 more
The comparatively recent advent of Large Language Models (LLMs) has resulted in a wide array of new capabilities and components relevant to Human–Robot Interaction (HRI) researchers. LLMs are being applied to vision, manipulation, planning, reasoning, learning, and HRI problems, frequently as “Scarecrows,” in which LLMs serve as black box modules integrated into robot architectures for the purpose of quickly enabling full-pipeline solutions. However, despite this explosion of applications, general questions remain about the best ways to incorporate LLMs into robot architectures, appropriate safety and guardrail considerations, and, critically, how to report properly on HRI research that involves LLMs. In this article, we explore the question of reporting guidelines for HRI researchers who utilize Scarecrows in robot architectures. We identify five key stakeholder groups in the HRI research process, discuss what information each group needs from HRI researchers, and identify appropriate mechanisms for conveying that information from HRI researchers to stakeholders either directly or indirectly. We contribute a set of suggested guidelines regarding what information should be included when researchers disseminate information about HRI research that uses LLMs.
- Research Article
- 10.1145/3772066
- Jan 19, 2026
- ACM Transactions on Human-Robot Interaction
- Laura Saad + 3 more
Scales are commonly employed in Human-Robot Interaction (HRI) research, yet due to the field's multidisciplinary nature, many in this community lack direct training in psychometrics. This poses challenges for appropriate scale selection, accurate assessment of reliability and validity, and proper use. We provide a tutorial to empower researchers without scale development expertise to assess scale quality efficiently. We detail a guideline that provides high-level questions and examples to help the reader make confident evaluations of existing scales in HRI. The guideline is then used to evaluate the Godspeed and Robotic Social Attributes Scale (RoSAS). RoSAS is found to be adequately validated, whereas Godspeed warrants further investigation before it should be used in HRI contexts. The article concludes by offering advice on the use of custom scales and provides references for further enhancing expertise in this domain.
- Research Article
- 10.1145/3786200
- Dec 22, 2025
- ACM Transactions on Human-Robot Interaction
- Stela H Seo + 3 more
We explored whether and how individuals attribute authority to a robot. Online participants (N = 362 Japanese adults) watched short videos showing a social interaction between a group of three human actors and a lone agent. Crucially, in the videos, the group interacted with either a robot or a fourth human actor, bowing deeply to the lone agent. After each video, participants rated the extent to which the lone agent could be described as a source of authority (authority attribution) and the extent to which the group would comply with an order issued by the lone agent (obedience expectation). We found that participants perceived the robot as in charge and expected the group to comply with its orders, although participants attributed less authority to the robot than to the human actor and expected the group to obey the human actor more than the robot. We therefore conclude that bowing as a social cue for authority attribution applies to interactions between robots and humans. These findings can have important applications for designing authority- or leader-like robot figures.
- Research Article
- 10.1145/3785152
- Dec 19, 2025
- ACM Transactions on Human-Robot Interaction
- Azra Aryania + 3 more
The prevalence of social robots is increasing, with examples such as customer service robots in malls and airports. This trend highlights the importance of transparency, particularly in data-sharing interactions with social robots operating in public spaces, where users may be asked to provide personal information to receive personalized experiences. This paper investigates how design transparency influences user trust and data-sharing behavior in human-robot interactions. We conducted an experiment with 143 participants who interacted with the social robot ARI under two transparency conditions: low and high. In the low transparency condition, participants were informed about the data being collected and could choose to save or delete it. In the high transparency condition, the robot additionally indicated the sensitivity level of each data item: low (e.g., scenario preference), medium (e.g., name and email), and high (e.g., religious beliefs), allowing participants to make more informed decisions. Participants were presented with two scenarios: exploring city events and discovering local attractions. They received personalized recommendations based on their preferences, with the option to provide personal data (name, phone number, email) for possible future communication. After the interaction, participants decided whether to save or delete the data they had shared. The results indicated that while transparency did not significantly affect trust in the robot, it influenced data-sharing behavior. In particular, participants in the high transparency condition demonstrated more cautious behavior, opting to save less data and delete more. Furthermore, the results showed that both sensitivity level and transparency influenced the participants' data-sharing choices. Low-sensitivity data led to the highest rates of saving and the lowest rates of deletion, while medium-sensitivity data showed the opposite pattern.
These findings highlight the need to align data categorization with user perceptions to address data-sharing concerns more effectively.
- Research Article
- 10.1145/3785150
- Dec 17, 2025
- ACM Transactions on Human-Robot Interaction
- Daniel B Shank + 6 more
How do robot designers anthropomorphize their own creations? Because robot designers have the ability to alter the robot, identify as its creator, and understand their robot's internal makeup, their process of anthropomorphism and its outcomes may differ from those of the typical robot user. We investigate this research question in the domain of combat robots, where anthropomorphism is critical to decision-making, communication, and trust in high-stakes, high-emotion combat situations faced by robot-soldier teams. We conducted an in-depth case study of a university's student-led combat robotics design team over the design, construction, testing, and competition phases for their competitive combat robot. Based on inductive computational and human coding of extensive field notes, supplemented with interviews and surveys, we found that these robot designers anthropomorphize for three purposes. First, they anthropomorphize the bot to manage impressions of it within their team and to outsiders such as competitors, spectators, and sponsors, specifically presenting it as a warrior. Second, they anthropomorphize it as a child or a pet, or simply treat it as a non-anthropomorphic mechanical set of parts, as a way to calibrate their relationship and attach to or detach from their own creation. Third, they anthropomorphize the bot to assign blame, faulting either the bot itself, its parts, or others, based on whether it performs as they designed it to. We conclude with implications for anthropomorphism by robot designers and applications to military robot design.
- Research Article
- 10.1145/3785149
- Dec 16, 2025
- ACM Transactions on Human-Robot Interaction
- Cobe Wilson + 3 more
Objective: The purpose of this work was to examine the relationship between self-concept and ingroup/outgroup categorization of robots. Background: Social psychological literature can improve Human-Robot Interaction (HRI) through investigations of cultural differences, intergroup dynamics, and more. Parallel to human-human interaction, people categorize robots as ingroup ("my group") or outgroup ("not my group") based on a myriad of variables. They favor ingroup robots, viewing them as more positive and humanlike than outgroup robots or humans. Previous work has examined the effect of robot anthropomorphism (i.e., human-likeness) on this categorization process with diverse findings. Method: Examining the self-concept via the Two-baskets theory of self-cognitions, we compared the ingroup categorization of humans, machine-like robots, medium human-like robots, and high human-like robots using a simple categorization task. Results: Robots and humans were categorized to the ingroup in a pattern consistent with the uncanny valley effect: humans were most likely to be categorized as ingroup, followed by medium human-like, machine-like, and high human-like robots. Conclusions: Self-concept may not be as important for categorization as other factors; however, important categorization differences exist that follow the trend of the uncanny valley. Application: Those who design and utilize robots should take categorization differences into consideration when designing robots for public interactions. Further, those who purchase robots for use should carefully consider the implications of visual similarity to human beings to ensure optimal acceptance. OSF: https://osf.io/x9rqn/?view_only=fd0894404e304422a6c77ccffa013bcd