Abstract

To enable navigation of miniature aerial vehicles (MAVs) equipped with a low-quality inertial measurement unit (IMU), external sensors are typically fused with the information generated by the IMU. Most commercial systems for MAVs currently fuse GPS measurements with IMU information to navigate the MAV. However, there are many scenarios in which an MAV might prove useful but GPS is unavailable (e.g., indoors, in urban terrain, etc.). Therefore, several approaches have recently been introduced that couple information from an IMU with visual information (usually captured by an electro-optical camera). In general, methods for fusing visual information with an IMU utilize one of two techniques: 1) applying rigid-body constraints on where landmarks should appear in a pair of images (constraint-based fusion) or 2) simultaneously estimating the locations of features observed by the camera (mapping) and the location of the camera itself (simultaneous localization and mapping, or SLAM-based, fusion). While each technique has nuances associated with its implementation in a true MAV environment (i.e., computational requirements, real-time implementation, feature tracking, etc.), this paper focuses solely on answering the question "Which fusion technique (constraint- or SLAM-based) enables more accurate long-term MAV navigation?" To answer this question, specific implementations of constraint- and SLAM-based fusion techniques, with novel modifications for improved results on MAVs, are described. A basic simulation environment is used to compare the two fusion methods. We demonstrate the superiority of SLAM-based techniques in specific MAV flight scenarios and discuss the relative strengths and weaknesses of each fusion approach.
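
The distinction between the two techniques can be sketched with their standard formulations (the notation below is a generic illustration and is not drawn from the paper itself). Constraint-based fusion enforces a rigid-body (epipolar) constraint between corresponding homogeneous image points \(\mathbf{x}_1, \mathbf{x}_2\) in two views,

\[ \mathbf{x}_2^\top E \, \mathbf{x}_1 = 0, \qquad E = [\mathbf{t}]_\times R, \]

where \(R\) and \(\mathbf{t}\) are the relative rotation and translation between the two camera poses, whereas SLAM-based fusion augments the navigation filter state with the landmark positions \(\mathbf{p}_i\) and estimates them jointly with the vehicle state,

\[ \mathbf{x} = \begin{bmatrix} \mathbf{x}_{\mathrm{MAV}}^\top & \mathbf{p}_1^\top & \cdots & \mathbf{p}_n^\top \end{bmatrix}^\top . \]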

