Abstract

One of the fundamental skills supporting safe and comfortable interaction between humans is the capability to intuitively understand each other's actions and intentions. At the basis of this ability is the special-purpose visual processing that the human brain has developed to comprehend human motion. Among the first building blocks enabling the bootstrapping of such visual processing is the ability to detect movements performed by biological agents in the scene, a skill mastered by human babies within the first days of life. In this paper we present a computational model based on the assumption that this visual ability must rely on local, low-level visual motion features, which are independent of shape (such as the configuration of the body) and of perspective. Moreover, we implement the model on the humanoid robot iCub, embedding it into a software architecture that also leverages the regularities of biological motion to control the robot's attention and oculo-motor behaviors. In essence, we put forth a model in which the regularities of biological motion link perception and action, enabling a robotic agent to follow a human-inspired sensory-motor behavior. We posit that this choice facilitates mutual understanding and goal prediction during collaboration, increasing the pleasantness and safety of the interaction.
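
To make this assumption concrete, below is a minimal sketch, not the authors' implementation, of extracting local, low-level motion features from video using dense optical flow: the features summarize only how pixels move (speed statistics and direction spread), without encoding body configuration or viewpoint. OpenCV is assumed, and the motion threshold, the feature set, and the file name are illustrative placeholders.

    import cv2
    import numpy as np

    def frame_motion_features(prev_gray, gray):
        """Low-level motion features for one frame pair, via dense optical flow.

        The features describe how pixels move (speed statistics, direction
        spread), not the shape or configuration of what is moving."""
        flow = cv2.calcOpticalFlowFarneback(
            prev_gray, gray, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        moving = mag > 0.5                         # keep clearly moving pixels only
        if not np.any(moving):
            return np.zeros(3)
        return np.array([mag[moving].mean(),       # average speed
                         mag[moving].std(),        # speed variability
                         np.std(ang[moving])])     # direction dispersion

    # Hypothetical usage on a recorded video stream:
    cap = cv2.VideoCapture("scene.avi")            # placeholder file name
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        print(frame_motion_features(prev_gray, gray))
        prev_gray = gray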

Highlights

  • Robots are progressively entering our houses: robotic devices such as vacuum cleaners, pool cleaners, and lawn mowers are becoming more and more commonly used, and the growth of robotics in the consumer sector is expected to keep increasing in the near future. The fields of application for robotics will span domestic activities and entertainment, education, monitoring, security, and assisted living, leading robots to frequent interactions with untrained humans in unstructured environments.

  • Interaction in its simplest form seems constituted by a sensitivity to some properties of others' motion and to their direction of attention. Drawing inspiration from these observations, we propose a video-based computational method for biological motion detection, which we implement on the humanoid robot iCub (Metta et al., 2010a) to guide robot attention toward potential interaction partners in the scene.

  • We proceed by considering conditions that vary with respect to the training set: we focus on movements included in the training set but characterized by different speeds or trajectory patterns (Test II); actions in critical visibility situations, such as in the presence of occlusions, limited spatial extent of the observed motion, or even when just the shadow is in the camera field of view (Test III); and different human actions recorded with the robot (Test IV) and with a hand-held camera placed in front of the robot, to test the influence of the acquisition sensor and of the viewpoint (Test V). A training-and-evaluation sketch follows this list.
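
As a rough, hedged illustration of the train-and-test protocol just described, and of the temporal multi-resolution descriptor named in the outline below, here is a sketch that pools per-frame motion features over several window lengths and trains a binary biological vs. non-biological motion classifier. The window sizes, the synthetic data, and the linear SVM are assumptions for illustration, not the paper's actual parameters.

    import numpy as np
    from sklearn.svm import LinearSVC

    def multi_resolution_descriptor(frame_feats, window_sizes=(5, 10, 20)):
        """Pool recent per-frame motion features at several temporal scales."""
        parts = []
        for w in window_sizes:
            chunk = frame_feats[-w:]           # last w frames
            parts.append(chunk.mean(axis=0))   # coarse temporal pooling
            parts.append(chunk.std(axis=0))
        return np.concatenate(parts)

    rng = np.random.default_rng(0)

    def synthetic_sequence(biological):
        """Hypothetical stand-in for per-frame features (e.g., from an
        optical-flow stage): 'biological' sequences get smooth profiles,
        non-biological ones get erratic profiles."""
        t = np.linspace(0.0, 1.0, 30)
        base = np.sin(2 * np.pi * t) if biological else rng.normal(size=30)
        return np.stack([base,
                         np.abs(base),
                         rng.normal(scale=0.1, size=30)], axis=1)

    X = np.array([multi_resolution_descriptor(synthetic_sequence(lbl == 1))
                  for lbl in (0, 1) for _ in range(100)])
    y = np.repeat([0, 1], 100)

    shuffle = rng.permutation(len(X))          # mix the two classes
    X, y = X[shuffle], y[shuffle]

    clf = LinearSVC().fit(X[:150], y[:150])    # train on one split...
    print("held-out accuracy:", clf.score(X[150:], y[150:]))  # ...test on the rest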

Summary

INTRODUCTION

Robots are progressively entering our houses: robotic devices such as vacuum cleaners, pool cleaners, and lawn mowers are becoming more and more commonly used, and the growth of robotics in the consumer sector is expected to keep increasing in the near future. The fields of application for robotics will span domestic activities and entertainment, education, monitoring, security, and assisted living, leading robots to frequent interactions with untrained humans in unstructured environments. A key challenge in current robotics has become to maximize the naturalness of human–robot interaction (HRI), to foster a pleasant collaboration with potential non-expert users. To this aim, a promising avenue seems to be endowing robots with a certain degree of social intelligence, to enable them to behave appropriately in human environments. We put forth a model in which the regularities of biological motion link perception and action, enabling a robotic agent to follow a human-inspired sensory-motor behavior. This way, we address two fundamental components necessary to facilitate the understanding of robots by human users.

RELATED WORKS
A TEMPORAL MULTI-RESOLUTION BIOLOGICAL MOTION DESCRIPTOR
Instantaneous Motion Representation
Multi-Resolution Motion
Biological Motion Representation and Classification
OFFLINE EXPERIMENTAL ANALYSIS
Training the Motion Classifier
Testing the Classifier
OpfFeatExtractor
Classifier
BioMerger
PROVISION
IkinGazeControl
THE METHOD AT WORK ON THE ROBOT
Experiments on Online Learning
Experiment on Integration with PROVISION and Gaze Control
FINAL DISCUSSION