Abstract

Simultaneous localisation and mapping (SLAM) and navigation in dynamic environments remain highly challenging for vision-based mobile robots. The goal of this study is to reconstruct a static map and track dynamic objects with a combined camera and laser scanner system. An improved automatic calibration procedure is designed to merge images with laser point clouds. The fused data are then exploited to detect slowly moving objects and to reconstruct the static map. Tracking-by-detection requires the correct assignment of noisy detection results to object trajectories; in the proposed method, 3D motion models are combined with object appearance to handle occluded regions and crowded scenes. The proposed method was validated by experimental results gathered in a real environment and on publicly available data.
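
The abstract does not give the assignment formulation, so the following is only a minimal illustrative sketch of the tracking-by-detection step it describes: detections are matched to existing trajectories by minimising a cost that mixes a 3D motion term (distance to each track's predicted position) with an appearance term. The function name, the weights, the cosine-distance appearance measure, and the use of the Hungarian solver are assumptions for illustration, not the paper's method.

```python
# Illustrative sketch only; not the paper's actual assignment formulation.
import numpy as np
from scipy.optimize import linear_sum_assignment


def assign_detections_to_tracks(pred_positions, det_positions,
                                track_appear, det_appear,
                                w_motion=0.7, w_appear=0.3, max_cost=2.0):
    """Return (track_idx, det_idx) pairs whose combined cost is acceptable.

    pred_positions: (T, 3) predicted 3D positions of existing tracks
    det_positions:  (D, 3) 3D positions of current detections
    track_appear:   (T, F) appearance descriptors of tracks
    det_appear:     (D, F) appearance descriptors of detections
    """
    # Motion cost: Euclidean distance between each track's predicted position
    # (e.g. from a constant-velocity 3D motion model) and each detection.
    motion = np.linalg.norm(
        pred_positions[:, None, :] - det_positions[None, :, :], axis=-1)

    # Appearance cost: 1 - cosine similarity between appearance descriptors.
    ta = track_appear / np.linalg.norm(track_appear, axis=1, keepdims=True)
    da = det_appear / np.linalg.norm(det_appear, axis=1, keepdims=True)
    appear = 1.0 - ta @ da.T

    cost = w_motion * motion + w_appear * appear
    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm

    # Reject matches whose cost is too high; unmatched detections can spawn
    # new tracks, and unmatched tracks can coast through occlusions.
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]
```

Gating rejected matches with a cost threshold is the usual way motion and appearance cues are balanced in crowded scenes: occluded tracks are kept alive on the motion model alone until a detection with a consistent appearance reappears.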
