Abstract

In this paper, we present a visual simultaneous localization and mapping (SLAM) system that integrates measurements from multiple cameras to achieve robust pose tracking for autonomous navigation of micro aerial vehicles (MAVs) in unknown complex environments. We analyze the iterative optimizations for pose tracking and map refinement of visual SLAM in the multi-camera case; the analysis ensures the soundness and accuracy of each optimization update. In the final implementation, a well-known monocular visual SLAM system is extended to utilize two cameras with non-overlapping fields of view (FOVs). The resulting visual SLAM system enables autonomous navigation of an MAV in complex scenarios. The underlying theory extends readily to configurations with more cameras, when onboard computational capacity allows. For operation in large-scale environments, we modify the resulting visual SLAM system into a constant-time, robust visual odometry. To form a full visual SLAM system, we further implement an efficient back-end for loop closing. The back-end maintains a keyframe-based global map, which is also used for loop-closure detection. An adaptive-window pose-graph optimization method is proposed to refine the keyframe poses of the global map and thereby correct the pose drift inherent in the visual odometry. We demonstrate the efficiency of the proposed visual SLAM system onboard MAVs in experiments with both autonomous and manual flights. The pose tracking results are compared with ground-truth data provided by an external tracking system.
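For reference, pose-graph refinement of the kind mentioned above is commonly posed as a nonlinear least-squares problem over the keyframe poses. The following is a standard formulation in our own notation (keyframe poses $T_i \in SE(3)$, relative-pose constraints $Z_{ij}$ from odometry and loop closures, covariances $\Sigma_{ij}$), given as a sketch rather than a verbatim statement of the paper's objective:

$$\min_{\{T_i\}} \;\sum_{(i,j)\in\mathcal{E}} \left\| \log\!\left( Z_{ij}^{-1}\, T_i^{-1} T_j \right)^{\vee} \right\|^{2}_{\Sigma_{ij}^{-1}},$$

where $\mathcal{E}$ is the set of constraint edges and $\log(\cdot)^{\vee}$ maps an $SE(3)$ error to its minimal six-dimensional parameterization. In an adaptive-window scheme of this kind, the optimization is restricted to a subset of the keyframes rather than the full graph, which bounds the cost of each refinement.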

