Abstract
Projector video compensation aims to cancel the geometric and photometric distortions caused by non-ideal projection surfaces and environments when projecting videos. Most existing projector compensation methods start by projecting and capturing a set of sampling images, followed by an offline compensation model training step. Thus, considerable user effort is required before the user can watch the video. Moreover, the sampling images carry little prior knowledge of the video content and may lead to suboptimal results. To address these issues, this paper builds a video compensation system that adapts the compensation parameters online. Our approach consists of five threads and performs compensation, projection, capturing, and short-term and long-term model updates in parallel. Due to this parallel mechanism, rather than projecting and capturing hundreds of sampling images and training the model offline, we can directly use the projected and captured video frames for model updates on the fly. To adapt quickly to a new environment, we introduce a deep learning-based compensation model that integrates a fixed transformer-based method and a novel CNN-based network. Moreover, for fast convergence and to reduce error accumulation during fine-tuning, we present a strategy that coordinates short-term and long-term memory model updates. Experiments show that our system significantly outperforms state-of-the-art baselines.
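To make the parallel mechanism concrete, the following is a minimal sketch of how five threads could share a compensation model and two memory buffers. It is not the authors' implementation; all names (StubModel, the queues, the buffer sizes, and the update intervals) are hypothetical placeholders for the components named in the abstract.

```python
import threading
import queue
import time
from collections import deque

project_q = queue.Queue(maxsize=4)   # compensated frames awaiting projection
capture_q = queue.Queue(maxsize=4)   # projected/captured pairs awaiting storage
short_mem = deque(maxlen=16)         # short-term memory: newest frame pairs
long_mem = deque(maxlen=512)         # long-term memory: broader frame history
lock = threading.Lock()              # serializes access to shared model parameters

class StubModel:
    """Hypothetical stand-in for the transformer+CNN compensation model."""
    def compensate(self, frame):
        return frame                 # real model would pre-warp and recolor the frame
    def update(self, batch):
        pass                         # real model would take a fine-tuning step

def compensation(model, frames):
    for f in frames:
        with lock:
            project_q.put(model.compensate(f))

def projection():
    while True:
        f = project_q.get()
        capture_q.put((f, f))        # stand-in for the project-then-capture loop

def capturing():
    while True:
        pair = capture_q.get()       # each captured pair feeds both memories
        short_mem.append(pair)
        long_mem.append(pair)

def short_term(model):
    while True:                      # frequent, lightweight updates on recent pairs
        if short_mem:
            with lock:
                model.update(list(short_mem))
        time.sleep(0.01)

def long_term(model):
    while True:                      # infrequent updates to curb error accumulation
        if long_mem:
            with lock:
                model.update(list(long_mem))
        time.sleep(1.0)

if __name__ == "__main__":
    model = StubModel()
    frames = [object() for _ in range(32)]  # placeholder video frames
    workers = [
        threading.Thread(target=compensation, args=(model, frames)),
        threading.Thread(target=projection, daemon=True),
        threading.Thread(target=capturing, daemon=True),
        threading.Thread(target=short_term, args=(model,), daemon=True),
        threading.Thread(target=long_term, args=(model,), daemon=True),
    ]
    for w in workers:
        w.start()
    time.sleep(2)                    # let the pipeline run briefly, then exit
```

Under this reading, compensation never stalls on training: the display path (compensate, project, capture) and the two update threads only synchronize at the brief lock around the shared parameters.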