Abstract

This paper presents a laser-based method for tracking (estimating the pose and size of) moving objects using multiple mobile robots in global-navigation-satellite-system (GNSS)-denied environments. Each robot is equipped with a multilayer laser scanner and detects moving objects, such as people, cars, and bicycles, in its own laser-scanned images using an occupancy-grid-based method. It then sends measurement information about the moving objects to a central server, which estimates the objects' poses (positions and velocities) and sizes from this information using a Bayesian filter. In this cooperative-tracking scheme, nearby robots continuously share their tracking information, enabling the tracking of objects that are invisible or only partially visible to an individual robot. Reliable cooperative tracking requires the robots to accurately estimate their relative poses. To do so in GNSS-denied environments, the relative pose is estimated by scan matching of laser measurements captured by both sensor nodes. This cooperative scan matching combines coarse registration based on four-points-congruent-sets (4PCS) matching with fine registration based on the iterative-closest-point (ICP) algorithm. Experimental results of tracking a car, a motorcycle, and a pedestrian with two robots in an outdoor GNSS-denied environment validate the proposed method.
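
To illustrate the Bayesian-filter stage described above, the following is a minimal sketch of how a central server could fuse position measurements of one object reported by two robots. It assumes a linear Kalman filter with a constant-velocity motion model over 2-D position, Gaussian noise, and measurements already transformed into a common frame by the scan-matching step; the state layout, noise parameters, and omission of size estimation are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

dt = 0.1  # assumed scan period [s]

# State x = [px, py, vx, vy]: 2-D position and velocity (constant-velocity model).
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
# Measurement model: each robot reports the object's 2-D position.
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
Q = np.diag([0.01, 0.01, 0.1, 0.1])   # assumed process-noise covariance
R = np.diag([0.05, 0.05])             # assumed measurement-noise covariance per robot


def kf_predict(x, P):
    """Propagate the pose estimate one scan period ahead."""
    return F @ x, F @ P @ F.T + Q


def kf_update(x, P, z):
    """Fuse one position measurement (from any robot) into the estimate."""
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P


# Cooperative tracking: measurements of the same object from two robots,
# expressed in a common frame, are fused sequentially in the same filter.
x = np.zeros(4)
P = np.eye(4)
x, P = kf_predict(x, P)
for z_robot in [np.array([1.02, 0.48]), np.array([0.98, 0.52])]:
    x, P = kf_update(x, P, z_robot)
print(x)  # estimated position and velocity
```

Sequentially applying the update step for each robot's measurement is equivalent to a joint update when the measurement noises are independent, which is one reason a centralized filter is a natural fit for this kind of multi-robot fusion.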
