Abstract

Visual location retrieval aims to determine the location of an agent (e.g. a human or robot), or of the area it observes, by comparing its observations with a representation of the environment. Existing methods generally treat the problem as content-based image retrieval and have demonstrated promising localization accuracy. However, they are difficult to scale up because of the volume of reference data involved, and their image descriptions are not easily understood or communicated by humans when describing surroundings. Considering that humans often use less precise but easily produced qualitative spatial language and high-level semantic landmarks when describing an environment, this work proposes a coarse-to-fine qualitative location retrieval method that quickly narrows down the initial location of an agent by exploiting information available in large-scale open data. The approach describes and indexes a location/place using the perceived qualitative spatial relations between ordered pairs of co-visible landmarks from the viewer's perspective, termed 'qualitative place signatures' (QPS). The usability and effectiveness of the proposed method were evaluated on openly available datasets, together with simulated observations incorporating different types of perception errors.
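The abstract does not give the construction details of a QPS, but as a rough illustration only, the following Python sketch shows one plausible way a signature of this kind could be assembled from a viewpoint and a set of co-visible semantic landmarks, using simple left/right and nearer/farther relations between ordered pairs. All function names, relation labels, and the choice of qualitative calculus here are hypothetical and are not taken from the paper.

```python
import math
from itertools import combinations

def bearing(viewpoint, point):
    """Bearing (radians) from the viewpoint to a point, measured from east, CCW."""
    return math.atan2(point[1] - viewpoint[1], point[0] - viewpoint[0])

def qualitative_place_signature(viewpoint, landmarks, visible_ids):
    """Illustrative QPS-style descriptor (an assumption, not the authors' formulation).

    viewpoint   -- (x, y) position of the observer
    landmarks   -- dict: landmark id -> (x, y, semantic_type)
    visible_ids -- ids of landmarks co-visible from this viewpoint

    Returns a list of qualitative relations between ordered pairs of
    co-visible landmarks, as perceived from the viewpoint.
    """
    signature = []
    for a, b in combinations(sorted(visible_ids), 2):
        ax, ay, a_type = landmarks[a]
        bx, by, b_type = landmarks[b]
        # Signed angular difference: positive means a appears to the left of b.
        diff = (bearing(viewpoint, (ax, ay)) - bearing(viewpoint, (bx, by))
                + math.pi) % (2 * math.pi) - math.pi
        side = "left_of" if diff > 0 else "right_of"
        # Coarse relative-depth relation between the two landmarks.
        da = math.dist(viewpoint, (ax, ay))
        db = math.dist(viewpoint, (bx, by))
        depth = "nearer_than" if da < db else "farther_than"
        signature.append((a_type, side, depth, b_type))
    return signature

# Example: a viewer at the origin seeing a church and a fountain.
landmarks = {
    "L1": (10.0, 5.0, "church"),
    "L2": (3.0, 8.0, "fountain"),
}
print(qualitative_place_signature((0.0, 0.0), landmarks, {"L1", "L2"}))
# [('church', 'right_of', 'farther_than', 'fountain')]
```

Because such relations are coarse and viewpoint-dependent rather than pixel-level, signatures of this kind could be indexed compactly and matched against verbal descriptions, which is consistent with the coarse-to-fine retrieval strategy the abstract describes.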
