Virtual production, a filmmaking technique that seamlessly merges virtual and real cinematography, has revolutionized the film and television industry. However, traditional virtual production requires the setup of green screens, which can be both costly and cumbersome. We have developed a green screen-free virtual production system that incorporates a 3D tracker for camera tracking, enabling the compositing of virtual and real-world images from a moving camera with varying perspectives. To address the core issue of video matting in virtual production, we introduce a novel Boundary-Selective Fusion (BSF) technique that combines the alpha mattes generated by deep learning-based and depth-based approaches, leveraging their complementary strengths. Experimental results demonstrate that this combined alpha matte is more accurate and robust than those produced by either method alone. Overall, the proposed BSF technique is competitive with state-of-the-art video matting methods, particularly in scenarios involving humans holding objects or other complex settings. The proposed system enables real-time previewing of composite footage during filmmaking, reducing the costs associated with green screen setups and simplifying the compositing process of virtual and real images.
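The abstract describes BSF only at a high level, so the following is a minimal illustrative sketch, not the paper's actual method: it assumes BSF selects the deep learning matte in a narrow band around the foreground boundary (where soft edges matter most) and the depth-based matte in confident interior and exterior regions. All function names, the band-width parameter, and the thresholding logic are hypothetical.

```python
import numpy as np

def dilate(mask, iters=1):
    """Binary dilation with a 4-connected structuring element (pure NumPy)."""
    out = mask.copy()
    for _ in range(iters):
        grown = out.copy()
        grown[1:, :] |= out[:-1, :]
        grown[:-1, :] |= out[1:, :]
        grown[:, 1:] |= out[:, :-1]
        grown[:, :-1] |= out[:, 1:]
        out = grown
    return out

def boundary_selective_fusion(alpha_dl, alpha_depth, band=3, thresh=0.5):
    """Hypothetical sketch of boundary-selective fusion of two alpha mattes.

    Pixels within `band` of the foreground/background boundary take the
    deep learning matte (soft hair/edge detail); all other pixels take
    the depth-based matte (robust region estimate).
    """
    fg = alpha_depth > thresh
    # Boundary band: pixels within `band` of both foreground and background.
    boundary = dilate(fg, band) & dilate(~fg, band)
    return np.where(boundary, alpha_dl, alpha_depth)
```

Usage: given two H×W mattes in [0, 1], `boundary_selective_fusion(alpha_dl, alpha_depth)` returns a fused matte of the same shape; widening `band` shifts more of the result toward the deep learning matte.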