Abstract

To improve the accuracy of pose estimation for UAVs in Global Positioning System-denied (GPS-denied) indoor environments, visual Simultaneous Localization And Mapping (vSLAM) techniques have attracted increasing attention. The RGB-D (Red Green Blue-Depth) camera is a widely used visual sensor because it allows a UAV to perceive unknown environments and act accordingly. However, achieving robust pose estimation in featureless, uncharacteristic indoor environments often demands high computational resources. To address this challenge, this paper presents a novel real-time visual compass that estimates the three-Degree-of-Freedom (DoF) relative orientation of an RGB-D camera by integrating a surface-normals-based RANdom SAmple Consensus (RANSAC) model. In addition, the Manhattan World (MW) assumption is exploited to simplify the perception task by providing a way to probabilistically model background clutter. The proposed estimator is comprehensively evaluated on depth datasets captured in different man-made indoor scenes, and a universal parameter setting for the model is determined. The experiments demonstrate the effectiveness of the visual compass, with an average absolute rotation error under 2°. Moreover, the average time needed to process one image frame is under 20 ms, satisfying real-time requirements. This combination of accuracy and speed suggests the proposed approach outperforms state-of-the-art methods and is well suited to indoor UAV design.
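The abstract does not give the paper's algorithm, but the core idea it names — fitting a Manhattan frame to surface normals with RANSAC — can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' method: it repeatedly hypothesizes an orthonormal frame from two sampled normals and scores it by how many normals align (up to sign) with one of the three candidate axes.

```python
import numpy as np

def manhattan_frame_ransac(normals, iters=200, thresh_deg=10.0, seed=0):
    """Estimate a 3-DoF Manhattan frame (rotation) from unit surface normals.

    Hypothesis: build an orthonormal frame from two sampled normals.
    Score: count normals within thresh_deg of any of the three axes,
    treating n and -n as equivalent (walls face both ways).
    """
    rng = np.random.default_rng(seed)
    cos_t = np.cos(np.deg2rad(thresh_deg))
    best_R, best_inliers = None, -1
    n = len(normals)
    for _ in range(iters):
        i, j = rng.choice(n, size=2, replace=False)
        a = normals[i]
        # Gram-Schmidt: make the second sample orthogonal to the first
        b = normals[j] - np.dot(normals[j], a) * a
        nb = np.linalg.norm(b)
        if nb < 1e-6:          # degenerate pair (nearly parallel normals)
            continue
        b /= nb
        c = np.cross(a, b)     # third axis completes the right-handed frame
        R = np.stack([a, b, c], axis=1)  # columns = candidate Manhattan axes
        # cosine of each normal against each axis, sign-invariant
        align = np.abs(normals @ R)
        inliers = int(np.sum(align.max(axis=1) > cos_t))
        if inliers > best_inliers:
            best_inliers, best_R = inliers, R
    return best_R, best_inliers
```

Under the MW assumption most indoor normals cluster around three mutually orthogonal directions, so a correct hypothesis captures nearly all of them as inliers, which is what makes this scoring discriminative.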
