Abstract

Target-following mobile robots have gained attention in various industrial applications. This study proposes an ultra-wideband-based target localization method that provides highly accurate and robust target-tracking performance for a following robot. Based on the least-squares approximation framework, the proposed method improves localization accuracy by compensating for the localization bias and high-frequency deviations component by component. An initial calibration method is proposed to measure the device-dependent localization bias, which enables compensation of the bias error not only at the calibration points but also at any other point. An iterative complementary filter, which recursively produces an optimal estimate for each timeframe as a weighted sum of the previous and current estimates depending on the reliability of each, is proposed to reduce the deviation of the localization error. The performance of the proposed method is validated through simulations and experiments. Both the magnitude and deviation of the localization error were significantly improved, by up to 77% and 51%, respectively, compared with the previous method.
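To make the abstract's two ideas concrete, the following is a minimal sketch (not the authors' implementation) of (a) a linearized least-squares position estimate from UWB anchor ranges, and (b) one step of a reliability-weighted complementary filter that fuses the previous and current estimates. All function names, the anchor layout, and the use of inverse-variance weights as the "reliability" are illustrative assumptions.

```python
import numpy as np

def multilaterate(anchors, ranges):
    """Least-squares position from ranges to known UWB anchors.

    Linearizes the nonlinear range equations by subtracting the first
    one, then solves the resulting overdetermined linear system.
    """
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    a0, d0 = anchors[0], ranges[0]
    # 2*(a_i - a_0)^T x = d_0^2 - d_i^2 + |a_i|^2 - |a_0|^2
    A = 2.0 * (anchors[1:] - a0)
    b = (d0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

def complementary_update(prev_est, prev_var, meas, meas_var):
    """One iteration of a reliability-weighted complementary filter.

    The fused estimate is a weighted sum of the previous estimate and
    the current measurement; here "reliability" is modeled as the
    inverse of each source's variance (an illustrative assumption).
    """
    # Gain favors whichever source has the lower variance.
    k = prev_var / (prev_var + meas_var)
    fused = (1.0 - k) * prev_est + k * meas
    # Variance of the inverse-variance-weighted combination.
    fused_var = (prev_var * meas_var) / (prev_var + meas_var)
    return fused, fused_var
```

With noiseless ranges the linearized solver recovers the true position exactly, and with equal variances the filter returns the midpoint of the two estimates, as expected for equal weights.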

Highlights

  • Human-following mobile robots have recently been introduced to ease the burden of human operators in various applications [1,2,3]

  • The performance of the proposed method for each step was analyzed through simulation

  • We proposed a component-wise error-correction method to improve the localization accuracy of a target-following mobile robot


Introduction

Recently, human-following mobile robots have been introduced to ease the burden of human operators in various applications [1,2,3]. Robust and reliable human tracking is a key technology of following robots, enabling mobile robots to operate in cooperation with humans. Camera vision was generally adopted in previous studies because it provides abundant scene information at relatively low cost [12]. Previous studies have proposed human-tracking methods that combine camera vision with depth-camera or LiDAR data [13,14]. These sensor-fusion methods have a critical limitation: tracking failures frequently occur in crowded environments, where the camera loses the target whenever it is hidden by opaque obstacles.


