Abstract
There is a growing demand for unmanned aerial systems as autonomous surveillance, exploration and remote sensing solutions. Among the key concerns for robust operation of these systems is the need to navigate reliably without dependence on a global navigation satellite system (GNSS). This is of particular concern in Defence circles, but it is also a major safety issue for commercial operations. In these circumstances, the aircraft must navigate using only information from on-board passive sensors such as digital cameras. The autonomous feature-based visual system presented in this work offers a novel, integrated approach to the modelling and registration of visual features that responds to the specific needs of the navigation system. It detects visual features in Google Earth<sup>*</sup> imagery to build a feature database. The same algorithm then detects features in the on-board camera's video stream. On one level, these features serve to localise the vehicle relative to the environment using Simultaneous Localisation and Mapping (SLAM). On a second level, they are correlated with the database to localise the vehicle with respect to the inertial frame.

The performance of the presented visual navigation system was compared using satellite imagery from different years. Based on the comparison results, an analysis of the effects of seasonal, structural and qualitative changes in the imagery source on the performance of the navigation algorithm is presented.

<sup>*</sup> The algorithm is independent of the source of satellite imagery; another provider can be used.
Highlights
Unmanned aerial vehicles (UAVs) are currently seen as an optimal solution for next-generation intelligence, surveillance and reconnaissance (ISR) missions.
The autonomous map-aided visual navigation system proposed in this paper combines intensity- and frequency-based segmentation, high-level feature extraction and feature pattern matching to achieve reliable feature registration. The registered features generate the position and orientation innovations that restrict the inertial drift of the onboard navigation system.
We propose to distinguish glare from other features of similar intensity based on the surrounding or neighbouring connected components, and to assign the glare region to the same class as its neighbours.
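The neighbourhood-based glare reassignment described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a per-pixel class-label image and a boolean glare mask, treats the glare mask as a single region, and assigns it the majority class found among its non-glare border pixels.

```python
import numpy as np

def reassign_glare(classes, glare_mask):
    """Assign the glare region the majority class of its border neighbours.

    classes:    2-D int array of per-pixel class labels (e.g. road, roof, water).
    glare_mask: 2-D bool array marking saturated/glare pixels.
    Simplification: the whole mask is treated as one glare region.
    """
    out = classes.copy()
    h, w = classes.shape
    votes = {}  # class label -> number of bordering non-glare pixels
    ys, xs = np.nonzero(glare_mask)
    for y, x in zip(ys, xs):
        # inspect the 4-connected neighbours of each glare pixel
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not glare_mask[ny, nx]:
                c = int(classes[ny, nx])
                votes[c] = votes.get(c, 0) + 1
    if votes:
        # majority vote among the surrounding connected components' pixels
        out[glare_mask] = max(votes, key=votes.get)
    return out
```

A per-region version would first label the glare mask into connected components and vote separately for each; the single-region form above is kept deliberately short.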
Summary
Unmanned aerial vehicles (UAVs) are currently seen as an optimal solution for next-generation intelligence, surveillance and reconnaissance (ISR) missions. As research in the field of remote sensing shows, high-level visually identifiable features, such as roads, roofs, water bodies etc., can be reliably extracted from high-resolution satellite imagery and used to update map information or extract road networks. The basic concept behind this is the detection, extraction, localisation and matching of high-level features present in the aerial imagery (the road network and its components, areas of greenery, water bodies etc.) by modelling them with minimal geometric characterisations used for storage and association. The focus of the current work has been on the development of robust feature extraction and modelling that takes into account a-priori knowledge about road networks and suits the requirements of the navigation system. (ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume III-1, 2016, XXIII ISPRS Congress, 12–19 July 2016, Prague, Czech Republic)
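The detect-characterise-match pipeline summarised above can be sketched in a few lines. This is a hedged toy example, not the paper's algorithm: it assumes a grayscale image array, uses simple intensity thresholding and 4-connected component labelling in place of the paper's combined intensity/frequency segmentation, and uses centroid-plus-area pairs as the "minimal geometric characterisation" stored for association.

```python
import numpy as np
from collections import deque

def segment(img, thresh):
    """Intensity-based segmentation: pixels at or above thresh are foreground."""
    return img >= thresh

def connected_components(mask):
    """4-connected component labelling via BFS; returns a list of pixel lists."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    comps = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                q = deque([(y, x)])
                seen[y, x] = True
                pixels = []
                while q:
                    cy, cx = q.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                comps.append(pixels)
    return comps

def characterise(comps):
    """Minimal geometric characterisation: (centroid, area) per feature."""
    return [(tuple(np.mean(np.array(c), axis=0)), len(c)) for c in comps]

def match(db, observed, max_dist=3.0):
    """Associate each observed feature with its nearest database feature,
    rejecting matches beyond max_dist (pixels). Returns database indices."""
    pairs = []
    for (oc, _) in observed:
        dists = [np.hypot(oc[0] - dc[0], oc[1] - dc[1]) for (dc, _) in db]
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            pairs.append(j)
    return pairs
```

In the system described here, `db` would be built offline from satellite imagery and `observed` from each camera frame; the matched pairs then feed the position and orientation update of the navigation filter.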