Abstract

Underwater localization is a challenging task due to the unavailability of Global Positioning System (GPS) signals underwater. However, the ability to match georeferenced aerial images against acoustic data can help with this task. Autonomous hybrid aerial and underwater vehicles also demand a new localization method capable of combining perception from both environments. This study proposes a cross-domain and cross-view image matching method that uses a color aerial image and an underwater acoustic image to identify whether they were captured at the same location. The method is designed to match images acquired in partially structured environments with shared features, such as harbors and marinas. Our pipeline combines traditional image processing methods with deep neural network techniques. Real-world datasets from multiple regions are used to validate our work, achieving a matching precision of up to 80%.
