Abstract

Estimating distances between people and robots plays a crucial role in understanding social Human–Robot Interaction (HRI) from an egocentric view. It is a key step if robots are to engage in social interactions and collaborate with people as part of human–robot teams. Different sensors can be employed for distance estimation between a person and a robot, and the number of challenges the estimation method must address rises as the sensor technology becomes simpler. When estimating distances from individual images of a single camera in an egocentric position, it is often required that individuals in the scene face the camera, do not occlude each other, and are sufficiently visible for specific facial or body features to be identified. In this paper, we propose a novel method for estimating distances between a robot and people using single images from a single egocentric camera. The method builds on well-established 2D pose estimation, which tolerates partial occlusions, cluttered backgrounds, and relatively low resolution. It estimates distance with respect to the camera from the Euclidean distance between the ear and torso of people in the image plane. The ear and torso characteristic points were selected for their relatively high visibility regardless of a person's orientation and their relative uniformity across age and gender. Experimental validation demonstrates the effectiveness of the proposed method.
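The core computation can be sketched as follows. This is a minimal illustration, not the paper's implementation: the keypoint format, the calibration constant `k`, and the inverse-proportional mapping from pixel length to metric distance are assumptions based on a pinhole-camera model, under which the apparent ear–torso segment shrinks roughly in inverse proportion to the person's distance from the camera.

```python
import math

# Hypothetical keypoint format: dict mapping joint name -> (x, y) pixel
# coordinates, as produced by a 2D pose estimator (e.g. OpenPose-style output).

def ear_torso_pixel_length(keypoints):
    """Euclidean distance (in pixels) between the ear and torso keypoints."""
    (ex, ey), (tx, ty) = keypoints["ear"], keypoints["torso"]
    return math.hypot(ex - tx, ey - ty)

def estimate_distance(keypoints, k=150.0):
    """Estimate camera-to-person distance (metres) from a single image.

    k is a hypothetical camera-specific calibration constant
    (metres * pixels), not a value from the paper; in practice it would
    be fitted from images of people standing at known distances.
    """
    pixel_len = ear_torso_pixel_length(keypoints)
    if pixel_len == 0:
        raise ValueError("degenerate keypoints: ear and torso coincide")
    return k / pixel_len

# Example: a person whose ear-torso segment spans 50 px is estimated
# at 150 / 50 = 3.0 m under this calibration.
print(estimate_distance({"ear": (120, 80), "torso": (120, 130)}))  # 3.0
```

A per-camera calibration step of this kind would also absorb the camera's focal length and the assumed average ear–torso length of a person, which is where the abstract's observation about uniformity across age and gender matters.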



Introduction

Spatial placement of actors plays a crucial role in Human–Human Interaction (HHI). Unrestricted by physical constraints or the task at hand, it characterizes and influences social relationships between actors. Two widely known theories in social HHI are interpersonal distances (proxemics) [1] and the F-formation system [2,3]. These theories show that the spatial relationships between humans depend on their interactions: humans tend to position themselves at different interpersonal distances and spatial configurations depending on the context. Work in Human–Robot Interaction (HRI) has focused on importing aspects from HHI to create conducive interactions, where robots should adhere to proxemics and F-formations [4]. It can be non-trivial for an autonomous or semi-autonomous robot to respect spatial configurations [5]. On the one hand, the robot needs to estimate the distance between itself and persons in the scene from an egocentric perspective in order to determine the opportunities for social interaction; on the other hand, the robot must be able to join and collaborate with humans.
