Abstract

With the wide availability of 360-degree cameras, 360-degree videos have recently become popular. To attach a virtual tag to a physical object in a 360-degree video for augmented reality applications, automatic object tracking is required so that the virtual tag can follow its corresponding physical object. Relative to ordinary videos, 360-degree videos in the equirectangular format exhibit special characteristics such as viewpoint change, occlusion, deformation, lighting change, scale change, and camera shakiness, so tracking algorithms designed for ordinary videos may not work well on them. We therefore thoroughly evaluate the performance of eight modern trackers, in terms of accuracy and speed, on 360-degree videos. The pros and cons of these trackers on 360-degree videos are discussed, and possible improvements to adapt them to 360-degree videos are suggested. Finally, we provide a dataset containing nine 360-degree videos with ground-truth target positions as a benchmark for future research.
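As a rough sketch of the kind of evaluation the abstract describes (not the paper's exact protocol or benchmark code), the snippet below runs an off-the-shelf OpenCV tracker on an equirectangular video and reports mean intersection-over-union (IoU) against ground-truth boxes together with processing speed. The file names, the comma-separated ground-truth format, and the choice of CSRT as the tracker are illustrative assumptions.

```python
# Hedged sketch: benchmark an off-the-shelf tracker on an equirectangular
# 360-degree video. File names and ground-truth format are assumptions,
# not the benchmark released with the paper.
import time
import cv2

def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

cap = cv2.VideoCapture("360_video.mp4")             # assumed input file
gt = [tuple(map(int, line.split(",")))              # assumed format: x,y,w,h per frame
      for line in open("groundtruth.txt")]

ok, frame = cap.read()
tracker = cv2.TrackerCSRT_create()                  # illustrative choice of tracker
tracker.init(frame, gt[0])                          # initialize on the first ground-truth box

ious, start, n = [], time.time(), 0
while True:
    ok, frame = cap.read()
    if not ok or n + 1 >= len(gt):
        break
    n += 1
    found, box = tracker.update(frame)
    ious.append(iou(box, gt[n]) if found else 0.0)

fps = n / (time.time() - start)
print(f"mean IoU: {sum(ious) / len(ious):.3f}, speed: {fps:.1f} FPS")
```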

Highlights

  • 360-degree videos are becoming increasingly popular

  • For augmented reality applications using 360-degree videos, a common requirement is to register a virtual tag to a physical target

  • Instead of relying on inertial measurement units (IMUs) for human tracking, this paper focuses on vision-based methods for unknown object tracking


Summary

Introduction

Omnidirectional cameras, also called 360-degree cameras, are now widely available, lightweight, and can even be mounted on drones [1]. They are useful for recording indoor or outdoor activities because they capture the scene from all directions. In an augmented reality application, a virtual billboard overlaid on the video must follow its corresponding physical target over time. For this purpose, automatic tracking of a specific target in 360-degree videos is highly desirable. Prior work has adopted 360-degree interactive video to create evaluation scenarios where users can select their point of view during playback. Huang et al. [11] presented an automatic approach to generate spatial audio for panorama images based on object detection and action recognition.
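The equirectangular format underlying these videos maps the full sphere onto a 2:1 image, which is the root of several of the characteristics listed in the abstract: objects are stretched near the poles, and a target can leave one side of the frame and re-enter on the opposite side. A minimal sketch of that pixel-to-sphere mapping and the wrap-around handling it implies (an illustration, not code from the paper):

```python
import math

def pixel_to_sphere(x, y, width, height):
    """Map an equirectangular pixel (x, y) to (longitude, latitude) in radians.

    Longitude runs from -pi at the left edge to +pi at the right edge;
    latitude runs from +pi/2 at the top to -pi/2 at the bottom.
    """
    lon = (x / width - 0.5) * 2.0 * math.pi
    lat = (0.5 - y / height) * math.pi
    return lon, lat

def wrap_x(x, width):
    """Wrap a horizontal coordinate around the 360-degree seam.

    A target exiting the right edge re-enters on the left, which planar
    trackers designed for ordinary videos do not account for.
    """
    return x % width
```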

