Stable drone flight relies on Global Navigation Satellite Systems (GNSS); in complex environments, however, GNSS signals are prone to interference, destabilizing flight. Inspired by cross-view machine learning, this paper introduces the VDUAV dataset and designs the VRLM network architecture, opening new avenues for cross-view geolocation. First, to overcome the limited scene diversity of existing datasets, we build a digital twin platform that uses virtual-real mapping of latitude and longitude coordinates to incorporate 3D models of real-world environments. This platform enables the construction of the VDUAV cross-view drone-localization dataset while significantly reducing the cost of dataset production. Second, we introduce a new baseline model for cross-view matching, the Virtual Reality Localization Method (VRLM). The model uses FocalNet as its backbone, extracts multi-scale features from drone and satellite images through two separate branches, and fuses them with a Similarity Computation and Feature Fusion (SCFF) module. Weighted fusion of the multi-scale features preserves the most discriminative image cues, yielding substantial gains in both processing speed and localization accuracy. Experimental results show that VRLM outperforms FPI on the VDUAV dataset, reaching 83.35% on the MA@20 metric and 74.13% on the RDS metric.
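The weighted multi-scale fusion described above can be illustrated with a minimal sketch. This is not the paper's implementation: the function name `scff_fuse`, the use of per-scale cosine similarity, and the softmax weighting are all illustrative assumptions about how an SCFF-style module might combine features from the two branches.

```python
import numpy as np

def scff_fuse(drone_feats, sat_feats, weights):
    """Illustrative SCFF-style fusion (assumed, not the paper's code):
    compute cosine similarity between drone and satellite features at
    each scale, then combine the per-scale similarities with
    softmax-normalized weights into one matching score."""
    w = np.exp(weights) / np.exp(weights).sum()  # softmax scale weights
    sims = []
    for d, s in zip(drone_feats, sat_feats):
        d = d / np.linalg.norm(d)  # unit-normalize each feature vector
        s = s / np.linalg.norm(s)
        sims.append(float(d @ s))  # cosine similarity at this scale
    return float(np.dot(w, sims))  # weighted fusion of per-scale scores

# Toy multi-scale features: three scales with increasing dimensionality.
rng = np.random.default_rng(0)
drone = [rng.normal(size=n) for n in (64, 128, 256)]
sat = [rng.normal(size=n) for n in (64, 128, 256)]
score = scff_fuse(drone, sat, np.array([1.0, 1.0, 1.0]))
print(score)  # a single fused matching score in [-1, 1]
```

A higher fused score would indicate a better drone-to-satellite match; learned (rather than uniform) scale weights are what let such a module emphasize the most discriminative scale.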