Abstract

In this work, we present a novel scene description for large-scale localization using only geometric constraints. Our work extends compact world anchors with a search data structure to efficiently perform localization and pose estimation of mobile augmented reality devices across multiple platforms (e.g., HoloLens, iPad). The algorithm uses a bag-of-words approach to characterize distinct scenes (e.g., rooms). Since the individual scene representations rely on compact geometric (rather than appearance-based) features, the resulting search structure is very lightweight and fast, lending itself to deployment on mobile devices. We present a set of experiments demonstrating the accuracy, performance, and scalability of our novel localization method. In addition, we describe several use cases demonstrating how efficient cross-platform localization facilitates sharing of augmented reality experiences.
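To illustrate the retrieval step described above, the following is a minimal sketch of bag-of-words scene matching over quantized geometric descriptors. The codebook size, descriptor dimension, and function names are illustrative assumptions, not the paper's actual parameters or implementation.

```python
# Minimal sketch: bag-of-words scene retrieval over geometric descriptors.
# All sizes and names here are illustrative assumptions, not the paper's values.
import numpy as np


def quantize(descriptors: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Assign each geometric descriptor to its nearest codebook word."""
    # Pairwise squared distances between descriptors (N x D) and words (K x D).
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)


def bow_histogram(descriptors: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """L2-normalized bag-of-words histogram for one scene (e.g., a room)."""
    words = quantize(descriptors, codebook)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist


def best_match(query_hist: np.ndarray, scene_hists: dict) -> str:
    """Return the scene id whose histogram is most similar (cosine) to the query."""
    return max(scene_hists, key=lambda sid: float(query_hist @ scene_hists[sid]))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    codebook = rng.normal(size=(64, 8))  # 64 hypothetical "geometric words", 8-D descriptors
    scenes = {f"room_{i}": rng.normal(size=(200, 8)) for i in range(3)}
    db = {sid: bow_histogram(d, codebook) for sid, d in scenes.items()}
    query = bow_histogram(scenes["room_1"][:50], codebook)  # partial observation of one room
    print("matched scene:", best_match(query, db))
```

Because the per-scene representation is just a short normalized histogram, the search index stays small and comparisons are cheap, which is consistent with the abstract's claim that the structure is lightweight enough for on-device use.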
