Implicit behavioral cues for enhancing trust and comfort in robot social navigation
Robots navigating public spaces must move safely and intuitively. A key challenge in social navigation is conveying navigational intent that aligns with human expectations and situational dynamics. This study presents a real-world, within-subject comparison of five signaling strategies – no cue, sudden stop, speed reduction, trajectory shaping, and verbal announcement – to examine how different strategies influence pedestrian comfort, trust, predictability, clarity, and proxemics. Subjective ratings and video-coded behavior from 15 participants across repeated hallway encounters show that trajectory-based signaling produced the smoothest and most confident responses, improving comfort, trust, and predictability by up to 40% and proxemic respect by nearly 90% over the no-cue baseline. It also significantly outperformed the verbal cue in comfort, trust, and proxemic respect and achieved the highest mean ratings across all five measures. These findings highlight lightweight motion shaping as a practical strategy for socially appropriate robot navigation.
- Research Article
15
- 10.1145/3700599
- Feb 20, 2025
- ACM Transactions on Human-Robot Interaction
A major challenge to deploying robots widely is navigation in human-populated environments, commonly referred to as social robot navigation. While the field of social navigation has advanced tremendously in recent years, the fair evaluation of algorithms that tackle social navigation remains difficult because it involves not just robotic agents moving in static environments but also dynamic human agents and their perceptions of the appropriateness of robot behavior. In contrast, clear, repeatable, and accessible benchmarks have accelerated progress in fields like computer vision, natural language processing, and traditional robot navigation by enabling researchers to fairly compare algorithms, revealing limitations of existing solutions, and illuminating promising new directions. We believe the same approach can benefit social navigation. In this article, we pave the road toward common, widely accessible, and repeatable benchmarking criteria to evaluate social robot navigation. Our contributions include (a) a definition of a socially navigating robot as one that respects the principles of safety, comfort, legibility, politeness, social competency, agent understanding, proactivity, and responsiveness to context; (b) guidelines for the use of metrics, development of scenarios, benchmarks, datasets, and simulators to evaluate social navigation; and (c) a design of a social navigation metrics framework to make it easier to compare results from different simulators, robots, and datasets.
- Conference Article
1
- 10.1109/lars-sbr-wre48964.2019.00033
- Oct 1, 2019
With the growing presence of service robots in social environments, research into the development of social behavior in robots is becoming increasingly necessary. Service robotics and social navigation are research areas that have grown in recent years. However, helping humans with everyday tasks, as well as assisting people who need special attention such as the elderly and children, remains challenging in both domestic and commercial environments. In this paper we present the current problem of social navigation in service robots, a state-of-the-art review of social navigation models, and a proposed new social navigation model that uses context extraction from an ontology. This is expected to improve the naturalness and sociability of the robot as well as the comfort of the human. The project uses current detection and learning techniques and tools such as OpenPose, semantic mapping, and deep learning. Navigation experiments were performed in a simulated environment to select a navigation method, and people tracking using OpenPose has been implemented with satisfactory preliminary results. We hope this project will contribute to social navigation research through an ontology-based approach.
- Book Chapter
- 10.3233/atde250393
- Jun 16, 2025
With the continuous development of robotics and computer vision technology, mobile robots have been widely applied in various fields. In this process, social navigation for robots has attracted considerable attention due to its human-robot interactivity. Social navigation allows robots to follow human social interaction norms and ensures both the movement safety and social acceptability of navigation behaviors. This paper presents a review of social navigation methods for mobile robots. First, the development process of mobile robot social navigation is outlined. Then, social navigation methods are classified according to the type of social behavior perception. This paper also summarizes model-based and learning-based social navigation methods in detail. Finally, the challenges and future development trends of mobile robot social navigation are discussed.
- Research Article
28
- 10.1109/lra.2022.3196783
- Oct 1, 2022
- IEEE Robotics and Automation Letters
We present SEAN 2.0, an open-source system designed to advance social navigation via the training and benchmarking of navigation policies in varied social contexts. A key limitation of current social navigation research is that policies are often trained and evaluated considering only a few social contexts, which are fragmented across prior work. Inspired by work in psychology, we describe navigation context based on social situations, which encompass the robot task and environmental factors, and propose logic-based classifiers for five common examples. SEAN 2.0 allows a robot to experience these social situations via different methods for specifying and generating pedestrian motion, including a novel Behavior Graph method. Our experiments show that when data collected using the Behavior Graph method is used to learn a robot navigation policy, that policy outperforms others trained using alternative methods for pedestrian control. Also, social situations were found to be useful for understanding performance across social contexts. Other components of SEAN 2.0 include vision and depth sensors, several physical environments, different means of specifying robot tasks, and a range of evaluation metrics for social robot navigation. User feedback for SEAN 2.0 indicated that the system was “easier to navigate and more user friendly” than SEAN 1.0.
- Conference Article
1
- 10.1109/aciiw57231.2022.10086013
- Oct 18, 2022
Human-aware navigation and social navigation are growing fields of robotics, attempting to tackle challenging navigation problems in human environments. The physical tasks themselves are difficult to handle algorithmically due to their dynamic nature and the difficulty of predicting human behaviour. The challenge of this work is to show that, in addition to the practical aspects of navigation, the manner in which a robot navigates - analogous to speech and gesture prosody - conveys fine cues which are strongly perceived as expressive behaviours. In prior work, primitive types of navigation profiles were defined and combined with minimal cues of appearance to create an audio-visual corpus of robot motion. In this work, we show through online and in-person perception experiments that some of these profiles are significantly associated with impressionistic values related to robot intentions and personality. According to prior work involving people considered frail, these socio-affective values have an influence on care, in particular through the idea of frail robots for tender care. Initial results show that using saccadic or smooth motion can impact people's perceptions of whether the robot inspires confidence, seems frail or sturdy, authoritative or polite, aggressive or gentle, confident or doubtful, and nice or disagreeable.
- Conference Article
14
- 10.1109/iros47612.2022.9982021
- Oct 23, 2022
Robots moving safely and in a socially compliant manner in dynamic human environments is an essential benchmark for long-term robot autonomy. However, it is not feasible to learn and benchmark social navigation behaviors entirely in the real world, as learning is data-intensive, and it is challenging to make safety guarantees during training. Therefore, simulation-based benchmarks that provide abstractions for social navigation are required. A framework for these benchmarks would need to support a wide variety of learning approaches, be extensible to the broad range of social navigation scenarios, and abstract away the perception problem to focus on social navigation explicitly. While there have been many proposed solutions, including high fidelity 3D simulators and grid world approximations, no existing solution satisfies all of the aforementioned properties for learning and evaluating social navigation behaviors. In this work, we propose SocialGym, a lightweight 2D simulation environment for robot social navigation designed with extensibility in mind, and a benchmark scenario built on SocialGym. Further, we present benchmark results that compare and contrast human-engineered and model-based learning approaches to a suite of off-the-shelf Learning from Demonstration (LfD) and Reinforcement Learning (RL) approaches applied to social robot navigation. These results demonstrate the data efficiency, task performance, social compliance, and environment transfer capabilities for each of the policies evaluated to provide a solid grounding for future social navigation research.
- Research Article
18
- 10.3390/s21217087
- Oct 26, 2021
- Sensors (Basel, Switzerland)
Social robot navigation in public spaces, buildings, or private houses is a difficult problem that is not well solved due to environmental constraints (buildings, static objects, etc.), pedestrians, and other mobile vehicles. Moreover, robots have to move in a human-aware manner; that is, robots have to navigate in such a way that people feel safe and comfortable. In this work, we present two navigation tasks, social robot navigation and robot accompaniment, which combine machine learning techniques with the Social Force Model (SFM), allowing human-aware social navigation. The robots in both approaches use data from different sensors to capture environmental knowledge as well as information about pedestrian motion. The two navigation tasks make use of the SFM, a general framework in which human motion behaviors can be expressed through a set of functions depending on the pedestrians' relative and absolute positions and velocities. Additionally, in both social navigation tasks, the robot's motion behavior is learned using machine learning techniques: in the first case using supervised deep learning and, in the second case, using Reinforcement Learning (RL). The machine learning techniques are combined with the SFM to create navigation models that behave in a social manner when the robot is navigating in an environment with pedestrians or accompanying a person. The systems were validated with a large set of simulations and real-life experiments with a new humanoid robot named IVO and with an aerial robot. The experiments show that the combination of SFM and machine learning can solve human-aware robot navigation in complex dynamic environments.
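To make the SFM component concrete, the sketch below computes a resultant force as goal attraction plus exponential pedestrian repulsion, the standard form of the model. The parameter values (`tau`, `v_des`, `A`, `B`) and the function name are illustrative assumptions, not the functions learned in the paper.

```python
import numpy as np

def social_force(pos, vel, goal, pedestrians,
                 tau=0.5, v_des=1.0, A=2.0, B=0.8):
    """One Social Force Model step: goal attraction plus pedestrian repulsion.

    All parameter values are illustrative, not the paper's learned models.
    """
    # Attractive force: relax the current velocity toward the desired
    # velocity pointing at the goal.
    direction = (goal - pos) / (np.linalg.norm(goal - pos) + 1e-9)
    f_goal = (v_des * direction - vel) / tau

    # Repulsive force from each pedestrian, decaying exponentially
    # with distance and pointing away from the pedestrian.
    f_rep = np.zeros(2)
    for p in pedestrians:
        diff = pos - p
        dist = np.linalg.norm(diff) + 1e-9
        f_rep += A * np.exp(-dist / B) * (diff / dist)
    return f_goal + f_rep
```

In the paper's two tasks, terms like these are combined with weights learned by supervised deep learning or RL rather than hand-tuned constants.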
- Research Article
- 10.51551/verimlilik.1523828
- Apr 16, 2025
- Verimlilik Dergisi
Purpose: Ensuring both human safety and transportation efficiency simultaneously during the navigation of autonomous mobile robots (AMRs) in warehouse logistics is a challenging problem due to dynamic environments and diverse obstacles. In this study, a social navigation approach based on artificial intelligence was developed to optimize these two critical factors. Methodology: RGB images from an Intel RealSense D455 depth camera mounted on the PIXER AMR were utilized in a YOLOv8-based model to detect humans and reach trucks (RT). For human detection, the YOLOv8 model was trained with 4746 images and labels for 362 epochs, while RT detection used 4193 images and labels for 450 epochs. Each dataset was split into 60% training, 20% testing, and 20% validation subsets. The depth feature of the camera was used to measure object distances. Findings: Objects detected with at least 80% accuracy had their midpoints identified, and distances were calculated using the depth camera. For humans detected within 2 meters, the robot's max_speed was reduced to 80%. For RTs detected at 6 meters, a new path was planned. Originality: This study provides a novel integration of social navigation and deep learning to address the dual challenge of ensuring safety and efficiency in AMR navigation, contributing to advancements in warehouse logistics.
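The two distance rules reported in the Findings can be sketched as a small decision step applied to each frame's detections. The `(label, confidence, distance)` tuple layout and the label strings are assumptions made for illustration, not the paper's actual data structures.

```python
def adjust_navigation(detections, max_speed):
    """Apply two distance-based rules to detections.

    detections: list of (label, confidence, distance_m) tuples -
    an assumed layout, not the paper's actual interface.
    Returns the adjusted speed and whether to replan the path.
    """
    speed = max_speed
    replan = False
    for label, conf, dist in detections:
        if conf < 0.80:               # only act on detections >= 80% confidence
            continue
        if label == "human" and dist <= 2.0:
            speed = 0.8 * max_speed   # slow to 80% of max_speed near a human
        elif label == "reach_truck" and dist <= 6.0:
            replan = True             # plan a new path around the reach truck
    return speed, replan
```

For example, a human detected at 1.5 m with 90% confidence triggers the speed reduction, while a low-confidence detection is ignored.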
- Research Article
64
- 10.1109/lra.2022.3184025
- Oct 1, 2022
- IEEE Robotics and Automation Letters
Social navigation is the capability of an autonomous agent, such as a robot, to navigate in a “socially compliant” manner in the presence of other intelligent agents such as humans. With the emergence of autonomously navigating mobile robots in human-populated environments (e.g., domestic service robots in homes and restaurants and food delivery robots on public sidewalks), incorporating socially compliant navigation behaviors on these robots becomes critical to ensuring safe and comfortable human-robot coexistence. To address this challenge, imitation learning is a promising framework, since it is easier for humans to demonstrate the task of social navigation than to formulate reward functions that accurately capture its complex multi-objective setting. The use of imitation learning and inverse reinforcement learning for social navigation on mobile robots, however, is currently hindered by a lack of large-scale datasets that capture socially compliant robot navigation demonstrations in the wild. To fill this gap, we introduce the Socially CompliAnt Navigation Dataset (SCAND) – a large-scale, first-person-view dataset of socially compliant navigation demonstrations. Our dataset contains 8.7 hours, 138 trajectories, and 25 miles of socially compliant, human tele-operated driving demonstrations that comprise multi-modal data streams including 3D lidar, joystick commands, odometry, and visual and inertial information, collected on two morphologically different mobile robots – a Boston Dynamics Spot and a Clearpath Jackal – by four different human demonstrators in both indoor and outdoor environments. We additionally perform preliminary analysis and validation through real-world robot experiments and show that navigation policies learned by imitation learning on SCAND generate socially compliant behaviors.
- Conference Article
4
- 10.1109/inista49547.2020.9194617
- Aug 1, 2020
Social robot navigation aims to generate human-friendly paths in human-robot interactive environments. This paper focuses on maintaining humans' physical safety and mental comfort during robot navigation in an unknown dynamic environment. To achieve this goal, we use a variant of a pedestrian model developed specifically for low- or average-density environments. Design decisions on the representation of obstacles and pedestrians are important for smooth motion planning. Limiting the local obstacles to a region centered at the robot has weaknesses in terms of time complexity, because a highly detailed map has a great number of cells to be evaluated. The study contributes to the theoretical field with extensions such as an obstacle representation model that aims to overcome the computational cost of current solutions for smooth motion planning, which can be a bottleneck for the entire system. The proposed method is tested on a physical mobile robot in a hallway scenario, both in a real-world environment and in simulation, and its success is experimentally shown.
- Conference Article
7
- 10.1109/icarsc.2019.8733641
- Apr 1, 2019
Robot navigation in human-populated environments is a subject of great interest in the international scientific community. In order to be accepted in these scenarios, it is important for robots to navigate respecting social rules. Avoiding getting too close to a person, not interrupting conversations, and asking for permission or collaboration when required by social conventions are some of the behaviours that robots must exhibit. This paper presents a social navigation system that integrates different software agents within a cognitive architecture for robots and describes, as the main contribution, the corpus that allows dialogues to be established between robots and humans in real situations to improve the human-aware navigation system. The corpus has been experimentally evaluated through the simulation of different daily situations in which robots need to plan interactions with real people. The results are analysed qualitatively, according to the behaviour expected of the robot in the interaction performed. The results show how the corpus presented in this paper improves robot navigation, making it more socially accepted.
- Conference Article
26
- 10.1109/roman.2014.6926347
- Aug 1, 2014
Robot navigation in human environments is an active research area that poses serious challenges. Among them, social navigation and human-awareness have gained a lot of attention in recent years due to their important role in human safety and robot acceptance. Learning has been proposed as a more principled way of estimating the insights of human social interactions. In this paper, inverse reinforcement learning is analyzed as a tool to transfer typical human navigation behavior to the robot's local navigation planner. Observations of real human motion interactions found in a publicly available dataset are employed to learn a cost function, which is then used to determine a navigation controller. The paper presents an analysis of the controller's behavior in two different scenarios involving interaction with people, and a comparison of this approach with a proxemics-based method.
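The last step of such an IRL pipeline, turning a learned cost into a controller, can be sketched as follows: assuming a linear cost over hand-picked features (the paper's actual feature set and learned weights are not reproduced here), a local controller scores candidate velocities and picks the cheapest. All names, the 0.5 s lookahead, and the feature choices are illustrative.

```python
import numpy as np

def feature_vector(robot_pos, robot_vel, persons):
    """Two hand-picked features: proximity to the nearest person and speed.

    An illustrative feature set; the paper's actual features may differ.
    """
    d_min = min(np.linalg.norm(robot_pos - p) for p in persons)
    return np.array([1.0 / (d_min + 1e-3), np.linalg.norm(robot_vel)])

def best_control(robot_pos, candidates, persons, w):
    """Pick the candidate velocity minimizing the learned linear cost w^T f.

    Each candidate is scored at its predicted position after an
    assumed 0.5 s lookahead.
    """
    costs = [w @ feature_vector(robot_pos + v * 0.5, v, persons)
             for v in candidates]
    return candidates[int(np.argmin(costs))]
```

With a weight vector that penalizes proximity, the controller prefers velocities that keep distance from people, which is the behavior IRL aims to recover from demonstrations.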
- Book Chapter
3
- 10.1007/978-3-319-27149-1_19
- Nov 28, 2015
Robot navigation in human environments is an active research area that poses serious challenges in both robot perception and actuation. Among them, social navigation and human-awareness have gained a lot of attention in recent years due to their important role in human safety and robot acceptance. Several approaches have been proposed; learning by demonstration stands as one of the most used approaches for estimating the insights of human social interactions. However, the features used to model the person-robot interaction are typically assumed to be given. It is very usual to consider general features like robot velocity, acceleration, or distance to people, but there are no studies on the criteria used to select such features.
- Conference Article
47
- 10.1109/icra48506.2021.9560893
- May 30, 2021
Navigating fluently around pedestrians is a necessary capability for mobile robots deployed in human environments, such as buildings and homes. While research on social navigation has focused mainly on the scalability with the number of pedestrians in open spaces, typical indoor environments present the additional challenge of constrained spaces such as corridors and doorways that limit maneuverability and influence patterns of pedestrian interaction. We present an approach based on reinforcement learning (RL) to learn policies capable of dynamic adaptation to the presence of moving pedestrians while navigating between desired locations in constrained environments. The policy network receives guidance from a motion planner that provides waypoints to follow a globally planned trajectory, whereas RL handles the local interactions. We explore a compositional principle for multi-layout training and find that policies trained in a small set of geometrically simple layouts successfully generalize to more complex unseen layouts that exhibit composition of the structural elements available during training. Going beyond walls-world like domains, we show transfer of the learned policy to unseen 3D reconstructions of two real environments. These results support the applicability of the compositional principle to navigation in real-world buildings and indicate promising usage of multi-agent simulation within reconstructed environments for tasks that involve interaction. https://ai.stanford.edu/~cdarpino/socialnavconstrained/
- Book Chapter
7
- 10.1007/978-3-030-36150-1_53
- Nov 20, 2019
Path planning is one of the most widely studied problems in robot navigation. It deals with estimating an optimal set of waypoints from an initial to a target coordinate. New generations of assistive robots should be able to compute these paths considering not only obstacles but also social conventions. This ability is commonly referred to as social navigation. This paper describes a new socially-acceptable path-planning framework where robots avoid entering areas corresponding to the personal spaces of people and, most importantly, areas related to human-human and human-object interaction. To estimate the social cost of invading personal spaces we use the concept of proxemics. To model the social cost of invading areas where interaction is happening we include the concept of object interaction space. The framework uses Dijkstra's algorithm on a uniform graph of free space where edges are weighted according to the social traversal cost of their outbound node. Experimental results demonstrate the validity of the proposal to plan socially-accepted paths.
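A minimal sketch of this planning step, assuming a Gaussian proxemics penalty added to a unit traversal cost on a uniform grid (the paper's exact personal-space and interaction-space cost models are not reproduced, and `sigma` and `k` are illustrative values):

```python
import heapq
import math

def social_dijkstra(grid_w, grid_h, start, goal, persons, sigma=1.0, k=5.0):
    """Dijkstra on a uniform grid where entering each cell costs 1 plus a
    Gaussian proxemics penalty around each person (illustrative cost model).
    """
    def cost(cell):
        x, y = cell
        return 1.0 + sum(
            k * math.exp(-((x - px) ** 2 + (y - py) ** 2) / (2 * sigma ** 2))
            for px, py in persons)

    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist[u]:
            continue  # stale queue entry
        x, y = u
        for v in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= v[0] < grid_w and 0 <= v[1] < grid_h:
                nd = d + cost(v)
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))

    # Walk predecessors back from the goal to recover the path.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

With a person placed on the straight line between start and goal, the planner detours around the penalized cells rather than cutting through the personal space.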