Abstract

Camera pose tracking is a fundamental task in Augmented Reality (AR) applications. In this paper, we present CATCHA, a method for camera pose tracking in cultural heritage interiors governed by rigorous conservatory policies. Our solution performs real-time model-based camera tracking against a textured point cloud, regardless of the technique used to register the cloud. Orthographic model rendering allows us to maintain real-time performance independent of point cloud density. The resulting algorithm powers a novel tool that helps both cultural heritage restorers and individual visitors visually compare, in real time, the current state of a cultural heritage location with its previously scanned state from the same viewpoint. The application achieves a frame rate of over 15 Hz on VGA frames on a mobile device and over 40 Hz using remote processing. We evaluate our approach on a model of the King’s Chinese Cabinet (Museum of King Jan III’s Palace at Wilanów, Warsaw, Poland), which was scanned in 2009 using the structured light technique, then renovated and scanned again in 2015. Additional tests are performed on a model of the Al Fresco Cabinet in the same museum, scanned using a time-of-flight laser scanner.

Highlights

  • In Augmented Reality (AR), virtual objects are superimposed on a real-world view observed by a user

  • Because our interest is in cultural heritage interiors, usually guarded by rigorous conservatory policies, we focus on a vision-based camera tracking algorithm, which, unlike alternative solutions that employ external tracking devices, does not require physical integration into the interior geometry

  • The aim of our work is to develop an AR system called Camera Tracking for Cultural Heritage Applications (CATCHA) with the following functionalities


Introduction

In Augmented Reality (AR), virtual objects are superimposed on a real-world view observed by a user. The main requirement for an immersive AR application is correct spatial alignment between real-world objects and virtual ones, which means that the virtual objects must be placed in the same coordinate system as the real objects. A precise camera pose relative to the world coordinate system must therefore be known; this process is often called camera pose estimation or camera tracking. Multiple camera tracking methods are available, such as magnetic, inertial, GPS-based, and vision-based methods [1]. Because our interest is in cultural heritage interiors, usually guarded by rigorous conservatory policies, we focus on a vision-based camera tracking algorithm, which, unlike alternative solutions that employ external tracking devices, does not require physical integration into the interior geometry. We consider only the visible-light spectrum because the use of infrared light may be dangerous for the pigments used to paint the walls [2].
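To make the alignment requirement concrete, a camera pose is a rigid transform (rotation R and translation t) that maps world coordinates into camera coordinates, after which a pinhole model projects points onto the image. The following minimal sketch is not from the paper; `project_point` and its parameters (focal lengths, principal point) are illustrative names for the standard pinhole projection that any pose-tracking pipeline ultimately relies on:

```python
def project_point(point_w, rotation, translation, focal, center):
    """Project a 3D world point to 2D pixel coordinates with a pinhole camera.

    rotation:    3x3 row-major world-to-camera rotation matrix (list of lists)
    translation: translation t such that p_cam = R * p_world + t
    focal:       (fx, fy) focal lengths in pixels
    center:      (cx, cy) principal point in pixels
    """
    # Rigid transform: bring the world point into the camera coordinate frame.
    p_cam = [
        sum(rotation[i][j] * point_w[j] for j in range(3)) + translation[i]
        for i in range(3)
    ]
    # Perspective division onto the image plane (point must lie in front of the camera, z > 0).
    u = focal[0] * p_cam[0] / p_cam[2] + center[0]
    v = focal[1] * p_cam[1] / p_cam[2] + center[1]
    return u, v

# Identity pose: camera at the world origin looking down +Z.
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0.0, 0.0, 0.0]
u, v = project_point([0.2, -0.1, 2.0], R, t, focal=(500.0, 500.0), center=(320.0, 240.0))
print(u, v)  # -> 370.0 215.0: a point 2 m ahead lands near the image center
```

Camera tracking solves the inverse problem: given observed pixel positions of known 3D model points, recover the R and t that best explain them. Estimating that pose per frame is exactly what keeps virtual content spatially aligned with the real interior.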


