Abstract

To meet the increasing demands of mobile service robot applications, a dedicated perception module is essential for interacting with users in real-world scenarios. In particular, multi-sensor fusion and human re-identification are recognized as active research fronts. With this paper, we contribute to the topic and present a modular detection and tracking system that models the position and additional properties of persons in the surroundings of a mobile robot. The proposed system introduces a probability-based data association method that, besides position, can incorporate face and color-based appearance features in order to re-identify persons when tracking gets interrupted. The system combines the results of various state-of-the-art image-based detection systems for person recognition, person identification, and attribute estimation. This allows a stable estimate of a mobile robot's user, even in complex, cluttered environments with long-lasting occlusions. In our benchmark, we introduce a new measure for tracking consistency and show the improvements when face and appearance-based re-identification are combined. The tracking system was applied in a real-world application with a mobile rehabilitation assistant robot in a public hospital. The estimated states of persons are used for user-centered navigation behaviors, e.g., guiding or approaching a person, but also for realizing socially acceptable navigation in public environments.
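To make the association idea concrete, the following minimal Python sketch shows one plausible way a probability-based association score could fuse position, face, and color cues. All names (gaussian_likelihood, association_score, sigma_pos) and the independence assumption are illustrative; the paper's exact formulation may differ.

```python
import math

def gaussian_likelihood(residual_m: float, sigma: float) -> float:
    """Likelihood of a position residual under a zero-mean Gaussian model."""
    return math.exp(-0.5 * (residual_m / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def association_score(pos_residual_m: float,
                      face_similarity: float,
                      color_similarity: float,
                      sigma_pos: float = 0.5) -> float:
    """Joint score for assigning a detection to a person hypothesis, assuming
    the modalities are conditionally independent given the hypothesis.
    face_similarity and color_similarity are match probabilities in [0, 1]."""
    return (gaussian_likelihood(pos_residual_m, sigma_pos)
            * face_similarity
            * color_similarity)

# After a long occlusion the position residual is large and uninformative,
# but strong face or color evidence can still dominate the score and
# re-identify the person.
print(association_score(pos_residual_m=0.3, face_similarity=0.9, color_similarity=0.8))
```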

Highlights

  • In recent years, mobile interactive service robots have been developed to operate in private home environments as personal assistants, and in public places, such as airports [2] and office buildings [3], as receptionists [4] and guides [5]

  • The work presented here was part of the research project ROGER (RObot-assisted Gait training in orthopEdic Rehabilitation) [7], in which we developed a rehabilitation robot that assists patients in recovering their physiological gait after orthopedic surgery

  • Each modality (face, pose, position, etc.) is tracked by an individual multi-hypothesis tracker, all sharing a global set of Hypothesis IDs (HIDs); a sketch of this structure follows this list
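The sketch below illustrates how per-modality trackers could share a global pool of HIDs, so that each modality keeps its own beliefs while all of them refer to the same person hypotheses. Class and function names are assumptions for illustration, not the paper's implementation.

```python
from dataclasses import dataclass, field
from itertools import count
from typing import Any, Dict

# Global HID source shared by all modality trackers (illustrative only).
_hid_counter = count(1)

def new_hid() -> int:
    """Allocate a globally unique Hypothesis ID (HID)."""
    return next(_hid_counter)

@dataclass
class ModalityTracker:
    """Multi-hypothesis tracker for a single modality (e.g., position or face).
    Each tracker keeps its own beliefs but indexes them by the shared HIDs."""
    modality: str
    beliefs: Dict[int, Any] = field(default_factory=dict)

    def update(self, hid: int, measurement: Any) -> None:
        # A real tracker would update a belief distribution here; this sketch
        # simply stores the latest associated measurement per hypothesis.
        self.beliefs[hid] = measurement

# One tracker per modality; the shared HIDs let results be fused per person.
trackers = {m: ModalityTracker(m) for m in ("position", "face", "color")}
hid = new_hid()                                   # a new person hypothesis
trackers["position"].update(hid, (2.1, 0.4))      # position in world coords [m]
trackers["face"].update(hid, [0.12, 0.87, 0.45])  # placeholder face feature
```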

Summary

Introduction

Mobile interactive service robots have been developed to operate in private home environments as personal assistants (see [1] for a recent survey on home service robots), and in public places, such as airports [2] and office buildings [3], as receptionists [4] and guides [5]. For such systems, adequate perception skills regarding the persons in the robot's proximity are essential to fulfill their individual tasks. In the rehabilitation scenario of the ROGER project, for example, patients are alerted when they deviate from their physiological gait pattern, or are given positive feedback when they walk without incidents. To this end, the robot has to accompany the patients during their self-training and analyze their gait in real-time. A correct prediction of the movements and intents of the surrounding persons requires the analysis of their body poses (standing or sitting), their movement directions, and their body orientations in the environment. All those properties have to be modeled, even if the robot's sensors are not focusing on the respective persons. A benchmark on a published multi-modal dataset shows the improvement in tracking consistency when individual features are added to the standard position tracker.
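As a rough illustration of the person model described above, the following minimal sketch (class and field names are assumptions, not taken from the paper) captures world position, velocity, body orientation, and posture, together with a constant-velocity prediction that can be applied even while the robot's sensors are not focused on the person.

```python
from dataclasses import dataclass, replace
from enum import Enum

class Posture(Enum):
    STANDING = "standing"
    SITTING = "sitting"

@dataclass(frozen=True)
class PersonState:
    """Illustrative person model: world position, velocity (movement
    direction), body orientation, and posture."""
    x: float            # world position [m]
    y: float
    vx: float           # velocity [m/s]; its direction is the movement direction
    vy: float
    orientation: float  # body orientation in the world frame [rad]
    posture: Posture

    def predict(self, dt: float) -> "PersonState":
        """Constant-velocity motion prediction, usable even while the robot's
        sensors are not focused on this person."""
        return replace(self, x=self.x + self.vx * dt, y=self.y + self.vy * dt)

state = PersonState(1.0, 2.0, 0.5, 0.0, 0.0, Posture.STANDING)
print(state.predict(0.1))  # estimated state 100 ms ahead
```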

Related Work
Sensor Fusion in Mobile Robot Person Tracking
Multi Target Tracking
Out of Sequence Measurements in Online Tracking
System Overview
Detection Modules
Position in 3D World Coordinates
Posture and Orientation
Re-Identification
Multimodal Tracking Framework
Belief Representation in the Individual Tracker Modules
Position and Velocity
Face and Color Features
Experimental Results
Benchmark on Labeled Dataset
Real-World User Trials
Conclusions
