Abstract

The camera is an attractive sensor for beyond-visual-line-of-sight drone operation, since cameras are small and low in weight, power consumption, and cost. However, state-of-the-art visual localization algorithms struggle to match visual data whose appearance differs significantly due to changes in illumination or viewpoint. This article presents iSimLoc, a learning-based global relocalization approach that is robust to appearance and viewpoint differences. The features learned by iSimLoc's place recognition network can be used to match query images against reference images from a different stylistic domain and viewpoint. In addition, our hierarchical global relocalization module searches in a coarse-to-fine manner, allowing iSimLoc to perform fast and accurate pose estimation. We evaluate our method on a dataset with appearance variations and on a dataset demonstrating large-scale matching over a long flight across complex terrain. iSimLoc achieves 88.7% and 83.8% successful retrieval rates on the two datasets, with 1.5 s inference time, compared to 45.8% and 39.7% for the next best method. These results demonstrate robust localization across a range of environments and conditions.
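The coarse-to-fine search mentioned above can be illustrated with a minimal sketch: a cheap comparison on a truncated descriptor first narrows the reference set, and the full descriptor is only compared against that shortlist. All names, descriptor sizes, and the two-stage structure here are illustrative assumptions, not iSimLoc's actual architecture or API.

```python
import numpy as np

def coarse_to_fine_retrieve(query_desc, ref_descs, ref_poses, top_k=5):
    """Hypothetical hierarchical retrieval sketch: a coarse similarity search
    shortlists candidates, then a fine full-descriptor comparison ranks them.
    (Illustrative only; not the iSimLoc implementation.)"""
    # Coarse stage: cosine similarity on a low-dimensional slice of the descriptor.
    coarse_q = query_desc[:32]
    coarse_r = ref_descs[:, :32]
    sims = coarse_r @ coarse_q / (
        np.linalg.norm(coarse_r, axis=1) * np.linalg.norm(coarse_q) + 1e-8)
    candidates = np.argsort(-sims)[:top_k]  # keep only the top-k coarse matches

    # Fine stage: full-descriptor distance, computed over the shortlist only.
    dists = np.linalg.norm(ref_descs[candidates] - query_desc, axis=1)
    best = candidates[int(np.argmin(dists))]
    return int(best), ref_poses[best]
```

The design point is that the expensive fine comparison touches only `top_k` references instead of the whole map, which is what makes a hierarchical search fast at large scale.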
