Abstract

Navigation requires self-localization, which GPS typically provides outdoors. However, GPS often incurs large errors in environments with radio reflection, such as urban areas, which can make precise self-localization impossible. By contrast, a human can understand their surroundings and infer their current location from street scenes. To implement this capability, we must match the current scene against images in a street view database. However, because images vary widely in field angle, time of day, and season, standard feature-based pattern matching is difficult. DeepMatching can match images precisely under varied lighting and field angles. Nevertheless, DeepMatching tends to misinterpret street images because it may detect spurious feature points in the road and sky. This study proposes two methods to reduce such misjudgments: (1) computing image similarity from features such as buildings by excluding the road and sky, and (2) splitting the panoramic image into four directional views and matching each separately. This study reports the results of each method and summarizes their performance across various images and resolutions.
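
The two preprocessing steps can be illustrated with a minimal sketch. It assumes an equirectangular panorama whose horizontal axis spans 360°, so quartering the width yields four directional views; the fixed top/bottom band fractions in the masking function are illustrative placeholders standing in for whatever road/sky exclusion the paper actually uses (e.g., a learned segmentation). Both function names are hypothetical, not from the paper.

```python
import numpy as np

def split_panorama(pano: np.ndarray) -> list[np.ndarray]:
    """Split an equirectangular panorama (H x W x 3) into four
    directional views by quartering the horizontal axis."""
    h, w = pano.shape[:2]
    quarter = w // 4
    return [pano[:, i * quarter:(i + 1) * quarter] for i in range(4)]

def mask_road_and_sky(view: np.ndarray,
                      sky_frac: float = 0.3,
                      road_frac: float = 0.3) -> np.ndarray:
    """Zero out the top (sky) and bottom (road) bands so that
    matching focuses on building features. The band fractions are
    placeholders; a segmentation-based mask would be used in practice."""
    out = view.copy()
    h = out.shape[0]
    out[: int(h * sky_frac)] = 0           # suppress sky region
    out[h - int(h * road_frac):] = 0       # suppress road region
    return out

# Usage: run DeepMatching (or any matcher) on each masked directional
# view against the street view database, then aggregate the four scores.
pano = np.zeros((512, 2048, 3), dtype=np.uint8)  # placeholder panorama
views = [mask_road_and_sky(v) for v in split_panorama(pano)]
```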
