Abstract

This paper proposes a novel approach to map-based navigation for unmanned aircraft. The proposed approach employs pattern matching of ground objects, rather than feature-to-feature or image-to-image matching, between an aerial image and a map database. Deep learning-based object detection converts the ground objects into labeled points, and the objects’ configuration is used to find the corresponding location in the map database. Using the deep learning technique as a tool for extracting high-level features reduces the image-based localization problem to a pattern-matching problem. The pattern-matching algorithm proposed in this paper requires neither altitude information nor a camera model to estimate the horizontal geographical coordinates of the vehicle. Moreover, it requires significantly less storage because the map database is represented as a set of tuples, each consisting of a label, latitude, and longitude. Probabilistic fusion with inertial measurements via a Kalman filter is incorporated to deliver a comprehensive navigation solution. Flight experiments demonstrate the effectiveness of the proposed system in real-world environments: the map-based navigation system provides position estimates with RMSEs within 3.5 m at heights over 90 m without the aid of GNSS.
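The abstract describes the map database as a set of (label, latitude, longitude) tuples and the localization step as matching the configuration of detected, labeled points against that database. The sketch below illustrates one way such a configuration match could work, using label-annotated pairwise distances normalized by the largest distance, which makes the comparison invariant to rotation and to the unknown image scale (consistent with needing no altitude or camera model). All names and the specific scoring rule are illustrative assumptions, not the authors' algorithm.

```python
import math
from itertools import combinations

# Hypothetical map database: each ground object stored as (label, lat, lon).
MAP_DB = [
    ("tank", 35.0010, 128.0010),
    ("building", 35.0020, 128.0030),
    ("crossroad", 35.0040, 128.0020),
]

def signature(points):
    """Scale- and rotation-invariant signature of a labeled point set:
    sorted (label-pair, normalized pairwise distance) tuples.
    `points` is a list of (label, x, y); at least two points are assumed."""
    dists = []
    for (la, xa, ya), (lb, xb, yb) in combinations(points, 2):
        d = math.hypot(xa - xb, ya - yb)
        dists.append((tuple(sorted((la, lb))), d))
    dmax = max(d for _, d in dists) or 1.0  # normalize out the unknown scale
    return sorted((pair, d / dmax) for pair, d in dists)

def match_score(img_points, db_points):
    """Compare the configuration of detections (pixel coords) against a
    candidate set of map objects (geographic coords). 0 = identical
    configuration; inf = label structure does not match."""
    s_img, s_db = signature(img_points), signature(db_points)
    if [p for p, _ in s_img] != [p for p, _ in s_db]:
        return float("inf")
    return sum(abs(a - b) for (_, a), (_, b) in zip(s_img, s_db))
```

A full matcher would slide this score over candidate object subsets of the database and pick the minimum; the winning subset's known coordinates then fix the vehicle's horizontal position, which the paper fuses with inertial measurements in a Kalman filter.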
