Abstract

With the advancement of media and computing technologies, video compositing techniques have improved considerably. These techniques are used not only in the entertainment industry but also in advertising and new media. Match-moving is a cinematic technique in virtual-real image synthesis that allows computer graphics (virtual objects) to be inserted into real-world scenes. To make virtual-real image synthesis realistic, it is important to obtain the internal parameters (such as focal length) and external parameters (position and rotation) of a red-green-blue (RGB) camera. Conventional methods recover these parameters by extracting feature points from recorded video frames to guide the virtual camera; they fail when the recorded scene contains occlusion or motion blur. In this paper, we propose a novel method (system) for pre-visualization and virtual-real image synthesis that overcomes these limitations. The system uses the spatial-understanding capability of Microsoft HoloLens to perform match-moving of virtual-real video scenes. Experimental results demonstrate that our system is more accurate and efficient than existing video-compositing systems.
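The roles of the two parameter sets mentioned above can be illustrated with a standard pinhole-camera projection: the external parameters move a virtual 3D point into the camera's frame, and the internal parameters (focal length, principal point) map it to a pixel. This is a minimal sketch under assumed values; the function and parameter names are illustrative and not taken from the paper.

```python
# Pinhole projection of a virtual 3D point into the recorded frame.
# Extrinsics (R, t) and intrinsics (fx, fy, cx, cy) are assumed values
# for illustration only.

def project_point(point_world, R, t, fx, fy, cx, cy):
    # External parameters: world -> camera coordinates, x_cam = R x + t.
    pc = [sum(R[i][k] * point_world[k] for k in range(3)) + t[i]
          for i in range(3)]
    # Internal parameters: perspective divide, then scale by the focal
    # length and shift by the principal point to get pixel coordinates.
    u = fx * pc[0] / pc[2] + cx
    v = fy * pc[1] / pc[2] + cy
    return u, v

# Identity rotation, zero translation: a point on the optical axis,
# 5 units in front of the camera, lands at the principal point.
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
u, v = project_point([0.0, 0.0, 5.0], I, [0.0, 0.0, 0.0],
                     fx=800, fy=800, cx=640, cy=360)
# -> (640.0, 360.0)
```

If either parameter set is wrong, the rendered virtual object drifts against the real footage, which is why match-moving must recover both accurately.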

Highlights

  • Virtual-real synthesis techniques are used for video production in the media industry because they can depict imagery that cannot be captured in the real world

  • We propose a novel method for pre-visualization and virtual-real image synthesis that overcomes the limitations of conventional methods

  • Building on these previous studies, we propose a stabilized match-moving system and a pre-visualization system using HoloLens


Introduction

Virtual-real synthesis techniques are used for video production in the media industry because they can depict imagery that cannot be captured in the real world. These techniques deliver a realistic experience as well as a reduction in production cost. Virtual-real synthesis refers to the insertion of computer graphics into a real-world scene with precise position, scale, and rotation information. Typical examples are simultaneous localization and mapping (SLAM) [4,5,6] and structure-from-motion (SfM) [7,8]. These methods extract the external parameters of the camera, which can then be used to create a virtual camera in a 3D program such as 3ds Max.
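The last step above, turning recovered extrinsics into a virtual camera, can be sketched as follows. SLAM/SfM conventionally report a world-to-camera transform [R | t]; a virtual camera in a 3D program instead needs its position and orientation in the world frame, which follow by inverting that transform. This is a generic sketch, not the paper's implementation; the function names are assumptions.

```python
# Convert a world-to-camera extrinsic [R | t], as recovered by SLAM or
# SfM, into the pose of a virtual camera in the world frame.
# Pure-Python 3x3 matrix helpers keep the sketch self-contained.

def transpose(R):
    return [[R[j][i] for j in range(3)] for i in range(3)]

def matvec(R, v):
    return [sum(R[i][k] * v[k] for k in range(3)) for i in range(3)]

def camera_pose_from_extrinsics(R, t):
    """Given x_cam = R x_world + t, the camera center is C = -R^T t
    and the camera's world-frame orientation is R^T."""
    Rt = transpose(R)
    C = [-c for c in matvec(Rt, t)]
    return C, Rt

# Example: a camera rotated 90 degrees about the vertical axis and
# translated by t. The recovered world position C satisfies R C + t = 0.
R = [[0, 0, -1],
     [0, 1,  0],
     [1, 0,  0]]
t = [1.0, 2.0, 3.0]
C, orientation = camera_pose_from_extrinsics(R, t)
# C -> [-3.0, -2.0, 1.0]
```

Animating this pose per frame is what drives the virtual camera so that inserted graphics stay locked to the real footage.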

