Abstract

We present a recognition-driven navigation system for large-scale 3D virtual environments. The proposed system consists of three parts: virtual environment reconstruction, feature database construction, and recognition-based navigation. The virtual environment is reconstructed automatically from LiDAR data and aerial images. The feature database is composed of image patches, each with extracted features and registered location and orientation information. The database images are taken at different distances from the scenes and from various viewing angles, and are then partitioned into smaller patches. When a user navigates the real world with a handheld camera, each captured image is used to estimate the user's location and orientation, which are then reflected in the virtual environment. With the proposed patch-based approach, recognition is robust to large occlusions and runs in real time. Experiments show that the proposed navigation system is efficient and well synchronized with real-world navigation.
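The patch-based pipeline the abstract describes — partitioning registered database images into patches, describing each patch with a feature vector, and matching a query patch to recover the tagged location and orientation — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy histogram descriptor and brute-force nearest-neighbor search are stand-ins for whatever features and indexing the authors actually use, and all function names here are hypothetical.

```python
import numpy as np

def partition_into_patches(image, patch_size):
    """Split a grayscale H x W image into non-overlapping square patches,
    keeping each patch's top-left position for later registration."""
    h, w = image.shape
    patches = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patches.append(((y, x), image[y:y + patch_size, x:x + patch_size]))
    return patches

def describe(patch, bins=8):
    """Toy descriptor: a normalized intensity histogram
    (a stand-in for the real local features)."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def match_query(query_patch, database):
    """Return the registration tag (e.g. location/orientation) of the
    database patch whose descriptor is nearest to the query's."""
    q = describe(query_patch)
    best_tag, best_dist = None, np.inf
    for tag, desc in database:
        d = np.linalg.norm(q - desc)
        if d < best_dist:
            best_tag, best_dist = tag, d
    return best_tag
```

Because matching operates on small patches rather than whole images, a query in which much of the scene is occluded can still match through its unoccluded patches — which is the intuition behind the robustness to large occlusions claimed above.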
