Dynamic Projection Mapping (DPM) requires geometric compensation of the projected image according to the position and orientation of moving objects. In addition, the projector's shallow depth of field causes pronounced defocus blur even under small object movements. Achieving delay-free, high-quality DPM therefore requires real-time geometric compensation and projector deblurring. To meet this demand, we propose a framework comprising two neural components: one for geometric compensation and one for projector deblurring. The former warps the image by estimating per-pixel optical flow between the projected and captured images; the latter performs real-time sharpening as needed. Ideally, the network's parameters would be trained on data acquired in the actual environment. However, training the network from scratch while executing DPM, which demands real-time image generation, is impractical, so the network must be pre-trained. Unfortunately, no large public real-world datasets exist for DPM because the patterns of image-quality degradation are highly diverse. To address this challenge, we propose a realistic synthetic data generation method that numerically models the geometric distortion and defocus blur of real-world DPM. Extensive experiments confirm that a model trained on the proposed dataset achieves projector deblurring in the presence of geometric distortion with quality comparable to state-of-the-art methods.
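The synthetic data generation idea can be illustrated with a minimal sketch: start from a clean image, apply a geometric distortion (here a hypothetical homography, standing in for the object-pose-dependent warp), then convolve with a Gaussian point-spread function as a simple stand-in for projector defocus blur. The function names, kernel size, and homography values below are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Normalized 2-D Gaussian kernel (illustrative defocus PSF)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def defocus_blur(img, sigma, size=9):
    """Convolve a grayscale image with a Gaussian PSF (edge padding)."""
    k = gaussian_psf(size, sigma)
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    H, W = img.shape
    out = np.zeros_like(img, dtype=float)
    for dy in range(size):
        for dx in range(size):
            out += k[dy, dx] * padded[dy:dy + H, dx:dx + W]
    return out

def warp_homography(img, Hmat):
    """Inverse-warp img by a 3x3 homography with bilinear sampling."""
    H, W = img.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    pts = np.stack([xs, ys, np.ones_like(xs)], axis=0).reshape(3, -1)
    src = np.linalg.inv(Hmat) @ pts          # destination -> source
    sx, sy = src[0] / src[2], src[1] / src[2]
    x0 = np.clip(np.floor(sx).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(sy).astype(int), 0, H - 2)
    fx = np.clip(sx - x0, 0.0, 1.0)
    fy = np.clip(sy - y0, 0.0, 1.0)
    out = (img[y0, x0] * (1 - fx) * (1 - fy)
           + img[y0, x0 + 1] * fx * (1 - fy)
           + img[y0 + 1, x0] * (1 - fx) * fy
           + img[y0 + 1, x0 + 1] * fx * fy)
    return out.reshape(H, W)

# Generate one synthetic training pair: (degraded, clean).
rng = np.random.default_rng(0)
clean = rng.random((32, 32))
Hmat = np.array([[1.0, 0.02, 1.5],     # hypothetical small distortion
                 [0.01, 1.0, -0.8],
                 [1e-4, 0.0, 1.0]])
degraded = defocus_blur(warp_homography(clean, Hmat), sigma=1.5)
```

A real pipeline would vary the distortion and blur parameters per sample to cover the diverse degradation patterns the abstract mentions; this sketch only shows the two numerical models composed once.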