Abstract

The ability to predict a person's trajectory and recover the target person when he or she moves out of the field of view of the robot's camera is an important requirement for mobile robots designed to follow a specific person in the workspace. This paper describes an extension of an online learning framework for trajectory prediction and recovery, integrated with a deep learning-based person-following system. The proposed framework first detects and tracks persons in real time using the single-shot multibox detector (SSD) deep neural network. It then estimates the real-world positions of the persons from a point cloud and identifies the target person to be followed by extracting the clothes color with the hue-saturation-value (HSV) color model. The framework allows the robot to learn the target trajectory prediction online from the historical path of the target person. Global and local path planners generate robot trajectories that follow the target while avoiding static and dynamic obstacles, all coordinated by a carefully designed state machine controller. We conducted intensive experiments in a realistic environment with multiple people and sharp corners behind which the target person may quickly disappear. The experimental results demonstrate the effectiveness and practicability of the proposed framework in this environment.
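As a rough illustration of the clothes-color identification step, the following Python sketch (using OpenCV) compares a hue-saturation histogram of each detected person's torso region against a stored histogram of the target's clothes. The function names, histogram bin counts, and matching threshold are illustrative assumptions, not details taken from the paper.

    import cv2
    import numpy as np

    def clothes_histogram(frame_bgr, box):
        """Normalized H-S histogram of the clothing area of one detection box (x, y, w, h)."""
        x, y, w, h = box
        # Use the middle of the bounding box (torso) as a proxy for the clothing region.
        roi = frame_bgr[y + h // 4 : y + 3 * h // 4, x : x + w]
        hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
        cv2.normalize(hist, hist, 0, 1, cv2.NORM_MINMAX)
        return hist

    def identify_target(frame_bgr, person_boxes, target_hist, threshold=0.7):
        """Return the index of the detection whose clothes color best matches the target, or None."""
        best_idx, best_score = -1, 0.0
        for i, box in enumerate(person_boxes):
            score = cv2.compareHist(clothes_histogram(frame_bgr, box),
                                    target_hist, cv2.HISTCMP_CORREL)
            if score > best_score:
                best_idx, best_score = i, score
        return best_idx if best_score >= threshold else None

In practice the target histogram would be captured once when following begins, and the correlation threshold tuned for the lighting conditions of the workspace.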

Highlights

  • Mobile robots that accompany people may soon become popular devices, similar to smartphones in everyday life, with their increasing use in personal and public service tasks across environments such as homes, airports, hotels, markets, and hospitals [1]

  • The contributions of this paper can be summarized as follows: first, we present a novel method for the robot to recover to the tracking state when the target person disappears, by predicting the target's future path from its past trajectory and planning the robot's movement path accordingly (a simple prediction sketch follows this list)

  • When the person turns at a corner, he/she may quickly disappear from the field of view (FoV) of the robot's camera
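As a minimal sketch of the kind of path prediction mentioned in the first contribution above, the snippet below extrapolates the target's recent (x, y) positions with a constant-velocity (degree-1 least-squares) fit. This model is assumed here only for illustration; the paper's online-learning predictor may differ.

    import numpy as np

    def predict_path(history, horizon=10, dt=0.1):
        """history: list of (t, x, y) samples; returns (x, y) predictions for the next `horizon` steps."""
        t = np.array([p[0] for p in history])
        x = np.array([p[1] for p in history])
        y = np.array([p[2] for p in history])
        # Fit x(t) and y(t) with degree-1 polynomials (constant-velocity assumption).
        fx = np.poly1d(np.polyfit(t, x, 1))
        fy = np.poly1d(np.polyfit(t, y, 1))
        future_t = t[-1] + dt * np.arange(1, horizon + 1)
        return [(float(fx(ti)), float(fy(ti))) for ti in future_t]

    # Example: a target moving roughly diagonally; predict its next three positions.
    hist = [(0.0, 0.0, 0.0), (0.1, 0.1, 0.05), (0.2, 0.2, 0.11), (0.3, 0.31, 0.16)]
    print(predict_path(hist, horizon=3))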



Introduction

Mobile robots that accompany people may soon become popular devices, similar to smartphones in everyday life, with their increasing use in personal and public service tasks across different environments such as homes, airports, hotels, markets, and hospitals [1]. Recent advances in artificial intelligence techniques and computing capability have allowed a high level of understanding comparable to that of humans in certain applications. The employment of these advances in robotic systems to enable the completion of more intelligent tasks is an interesting development [3]. Many challenges arise when the person moves out of the field of view (FoV) of the camera, for example, when the person turns a corner, becomes occluded by other objects, or makes a sudden change in her/his movement. This disappearance may lead the robot to stop and wait in its location until the target person returns to the robot's FoV. The freezing robot problem (FRP) occurs when the robot believes the environment to be unsafe, i.e., every path is expected to collide with an obstacle due to massive uncertainty [6].

