Abstract

In this paper, a new approach to visual geo-localization for natural environments is proposed. Digital elevation model (DEM) data is rendered in virtual space to construct a panoramic skyline database. By pairing this skyline database with real-world images (used as the "queries" to be localized), visual geo-localization is cast as a cross-modal image retrieval problem over panoramic skyline images, yielding a new visual geo-localization benchmark for natural environments. Specifically, a semantic segmentation model named LineNet is proposed for extracting skylines from query images, and it proves robust across a variety of complex natural environments. On this benchmark, a fully automatic method for large-scale cross-modal localization using panoramic skyline images is elaborated. Finally, a compound index is carefully designed to reduce the storage required for the global positioning descriptors and to improve retrieval efficiency. The proposed method is shown to outperform most state-of-the-art methods.
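The core idea of treating localization as skyline retrieval can be illustrated with a minimal sketch. The representation, matching rule, and all names below are illustrative assumptions, not the paper's actual method: each DEM-rendered panorama is modeled as a 360-sample vector of skyline elevation angles, and a query skyline (covering a narrower field of view) is matched against every candidate at all azimuth shifts.

```python
# Hypothetical sketch of skyline-based localization as retrieval.
# Each candidate location stores a 360-sample panoramic skyline
# (one elevation value per degree of azimuth); the query covers a
# narrower field of view and its heading is unknown, so it is slid
# over every circular shift. All data here is synthetic.

def slide_distance(panorama, query):
    """Best sum-of-squared-differences of query over all circular shifts."""
    n, m = len(panorama), len(query)
    best = float("inf")
    for shift in range(n):
        d = sum((panorama[(shift + i) % n] - query[i]) ** 2 for i in range(m))
        best = min(best, d)
    return best

def localize(database, query):
    """Return the id of the candidate whose panoramic skyline fits best."""
    return min(database, key=lambda cid: slide_distance(database[cid], query))

# Toy database: two candidate locations with synthetic skylines.
db = {
    "cell_A": [(i % 10) * 0.1 for i in range(360)],
    "cell_B": [((i + 5) % 12) * 0.2 for i in range(360)],
}
query = db["cell_B"][40:100]  # a 60-degree slice of cell_B's skyline
print(localize(db, query))  # → cell_B
```

A production system would replace this brute-force scan with compact global descriptors and an index over them (the role the paper's compound index plays), since exhaustive sliding comparison does not scale to large-area databases.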
