Abstract

When service robots operate in human environments, unexpected and unknown moving people may slow the convergence of robot localization, or even cause localization failure when the environment is crowded. In this article, a multisensor observation localizability estimation method is proposed and implemented to support reliable robot localization in unstructured environments with low-cost sensors. The contribution of the approach is a strategy that combines noisy laser range-finder data and RGB-D data to estimate a dynamic localizability matrix in a probabilistic framework. By aligning the two sensor frames, the unreliable portion of the laser readings, namely those that hit unexpected moving people, is quickly extracted according to the output of an RGB-D-based human detector, so that the influence of unexpected moving people on laser observations can be explicitly factored out. The method is easy to implement and well suited to ensuring robustness and real-time performance during long-term operation in populated environments. Comparative experiments are conducted, and the results confirm the effectiveness of the proposed method in improving localization accuracy and reliability in dynamic environments.

Highlights

  • Service robots have been increasingly designed to act autonomously over time in human environments.[1,2] Human environments contain objects that are either permanent, like walls; movable, like tables and chairs; or moving, like humans

  • It is recognized that when service robots are deployed for long-term operation in populated areas, unexpected moving objects pose a challenge to the reliability and accuracy of mobile robot self-localization

  • By aligning data acquired from a Kinect sensor and a laser range finder (LRF) mounted on the robot, the outputs of an RGB-D-based human detector are used to estimate the probability that laser scanner readings correspond to unexpected objects
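The beam-classification idea in the highlights can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the circular person model, and the fixed down-weighting factor are all assumptions, and detected people are taken as 2-D positions already transformed into the robot frame.

```python
import math

def beam_endpoint(range_m, angle_rad):
    """Convert a polar laser reading to Cartesian coordinates in the robot frame."""
    return (range_m * math.cos(angle_rad), range_m * math.sin(angle_rad))

def is_dynamic(endpoint, people, radius=0.35):
    """Return True if the beam endpoint lies within `radius` meters of a detected person.

    `people` is a list of (x, y) person positions in the robot frame; the
    circular footprint of radius 0.35 m is an assumed, simplified person model.
    """
    x, y = endpoint
    return any(math.hypot(x - px, y - py) <= radius for (px, py) in people)

def beam_weights(ranges, angles, people, w_static=1.0, w_dynamic=0.1):
    """Assign a low weight to beams that likely hit unexpected moving people,
    so they contribute less to the localization observation model."""
    weights = []
    for r, a in zip(ranges, angles):
        weights.append(w_dynamic if is_dynamic(beam_endpoint(r, a), people) else w_static)
    return weights
```

In a full system, the person positions would come from the RGB-D human detector after the extrinsic calibration between the Kinect and LRF frames has been applied.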


Summary

Introduction

Service robots have been increasingly designed to act autonomously over time in human environments.[1,2] Human environments contain objects that are either permanent, like walls; movable, like tables and chairs; or moving, like humans. Assuming that humans are the major cause of unexpected moving objects in the robot's working environment, it is desirable to apply RGB-D sensors (Kinect) for human detection[18,19,20] and autonomous robot navigation.[21,22] Comparing the characteristics of laser sensors and the Kinect sensor in this context, RGB-D-based human detection is much more efficient[20] than detection using an LRF. A fundamental problem is that an elaborate technique is required to properly combine the advantages of LRF and RGB-D sensors for fast and reliable robot localization in dynamic environments. For this purpose, this article proposes a real-time observation localizability estimation method for robot localization in unstructured environments that uses noisy and multi-cue information. The ''Method overview'' section introduces the outline of the proposed method, ''Estimating localizability using multisensor

Section outline

  • Method overview
  • Dynamic localizability matrix
  • Robot pose update (rectify the particle distribution with the observation update)
  • System implementation and experiment (structured and unstructured environment scenarios; MSE in Y direction; average success rate versus the classic method)
  • Findings
  • Conclusion