Abstract
Cross-view geolocalization matches the same target across images taken from different views, such as the views of unmanned aerial vehicles (UAVs) and satellites; it is a key technology for UAVs to localize and navigate autonomously without a positioning system (e.g., GPS or GNSS). The most challenging aspects in this area are the shifting of targets and nonuniform scales among different views. Published methods focus on extracting coarse features from parts of images but neglect the relationship between different views and the influence of scale and shifting. To bridge this gap, an effective network with well-designed structures, referred to as multiscale block attention (MSBA), is proposed on the basis of a local pattern network. MSBA cuts images into several parts at different scales and applies self-attention among them to make feature extraction more efficient. The features of different views are extracted by a multibranch structure designed so that the branches learn from each other, capturing a finer-grained relationship between views. The method was evaluated on the newest UAV-based geolocalization dataset. Compared with the existing state-of-the-art (SOTA) method, MSBA improved accuracy by almost 10% at equal inference time; at equal accuracy, inference time was shortened by 30%.
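A minimal sketch of the multiscale partition idea described above; the grid scales, pooling choice, and function name are illustrative assumptions rather than the paper's exact design:

```python
import torch

def multiscale_blocks(feat: torch.Tensor, scales=(1, 2, 4)) -> torch.Tensor:
    """Split a backbone feature map into square grids at several scales
    and pool each block into one descriptor. Downstream self-attention
    can then relate the blocks to each other. Scales are assumptions.

    feat: (B, C, H, W) feature map -> returns (B, num_blocks, C).
    """
    parts = []
    _, _, H, W = feat.shape
    for s in scales:                    # e.g. 1x1, 2x2, and 4x4 grids
        h, w = H // s, W // s
        for i in range(s):
            for j in range(s):
                block = feat[:, :, i * h:(i + 1) * h, j * w:(j + 1) * w]
                parts.append(block.amax(dim=(2, 3)))   # max-pool the block
    return torch.stack(parts, dim=1)
```

Feeding the stacked block descriptors through a standard attention layer (e.g., torch.nn.MultiheadAttention) would let parts at different scales exchange information, in the spirit of the self-attention step described in the abstract.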
Highlights
Block Attention (BA) is formulated in terms of the global feature map g_j and the max pooling operation P_max (see the sketch after these highlights)
When an image of size 384 × 384 was used as input, the method achieved 86.61% R@1 accuracy and 88.55% average precision (AP) on one matching direction, and 92.15% R@1 accuracy on the other
The performance of the method greatly surpassed that of existing competitive models, scoring nearly 10% higher than the previous best-performing method, the local pattern network (LPN), on some indicators
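As referenced in the first highlight, the following is a minimal, non-authoritative sketch of a block-attention module, assuming BA re-weights the feature map by a max-pooled channel descriptor; the class name, the learned projection, and the sigmoid gating are assumptions, not the paper's confirmed formulation:

```python
import torch
import torch.nn as nn

class BlockAttention(nn.Module):
    """Hypothetical block-attention sketch: re-weights a global feature
    map g_j by a descriptor obtained via max pooling (P_max). The exact
    equation in the paper may differ; this is only an illustration."""

    def __init__(self, channels: int):
        super().__init__()
        self.proj = nn.Linear(channels, channels)  # assumed learned projection

    def forward(self, g: torch.Tensor) -> torch.Tensor:
        # g: (B, C, H, W) global feature map
        w = torch.amax(g, dim=(2, 3))              # P_max over spatial dims -> (B, C)
        w = torch.sigmoid(self.proj(w))            # attention gate in (0, 1)
        return g * w[:, :, None, None]             # re-weighted feature map
```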
Summary
To bridge the gap between images of different views, the Siamese network was proposed [12], which helps a model learn viewpoint-invariant features. Shi et al. [18] applied a polar transform to warp aerial images and realize alignment between aerial and ground views (a generic polar warp is sketched below). They also designed DSM [19], a dynamic similarity-matching network that estimates cross-view orientation alignment during localization. In a recent work, inspired by partition strategies [24,25,26,27], Wang et al. [28] proposed the local pattern network (LPN), which concentrates on matching drone- and satellite-view images on University-1652. Existing hard part-based representation learning strategies ignore the offset and scale of the target location, and few attention mechanisms have been designed for cross-view geolocalization. When inference time was almost the same, the accuracy of the model was far ahead of that of the SOTA method (see Section 3)
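A hedged illustration of warping an aerial (top-down) image into polar coordinates so that its layout better resembles a ground-level panorama; this uses OpenCV's generic polar warp, not necessarily the exact transform of Shi et al. [18], and the file names, output size, and center point are placeholders:

```python
import cv2

# Placeholder input: a top-down aerial image centered on the target.
aerial = cv2.imread("aerial.png")
h, w = aerial.shape[:2]

# Warp to polar coordinates: radius maps to one axis, angle to the other.
polar = cv2.warpPolar(
    aerial,
    (512, 128),                 # assumed (width, height) of the warped image
    (w / 2, h / 2),             # assume the target sits at the image center
    min(h, w) / 2,              # maximum radius to sample
    cv2.WARP_POLAR_LINEAR,
)

# Rotate so the angle axis runs horizontally, like a panorama's azimuth.
panorama_like = cv2.rotate(polar, cv2.ROTATE_90_COUNTERCLOCKWISE)
cv2.imwrite("aerial_polar.png", panorama_like)
```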