Prior work on multi-projector displays has focused primarily on static rigid objects, with some addressing dynamic rigid objects. Work on projection-based displays for deformable dynamic objects, however, has been limited to small-scale, single-projector setups. Tracking a deformable dynamic surface and precisely updating projections on it in real time is a significantly challenging task, even for a single-projector system. In this paper, we present the first end-to-end solution for achieving a real-time, seamless display on deformable surfaces using multiple unsynchronized projectors, without requiring any prior knowledge of the surface or device parameters. The system first accurately calibrates multiple RGB-D cameras and projectors using the deformable display surface itself, and then uses the calibrated devices to track the continuous changes in the surface shape. Based on the deformation and projector calibration, the system warps and blends the image content in real time to create a seamless display on a surface that continuously changes shape. By using multiple projectors and RGB-D cameras, we bring the much-desired aspect of scale to displays on deformable surfaces. Most prior dynamic multi-projector systems assume rigid objects and depend critically on the constancy of surface normals and the absence of local shape deformations. These assumptions break down for deformable surfaces, making prior techniques inapplicable. Point-based correspondences become inadequate for calibration, a problem exacerbated by the lack of synchronization between the projectors. A few works address non-rigid objects, but with severe restrictions: targeting semi-deformable surfaces (e.g., the human face), using a single coaxial (optically aligned) projector-camera pair, or relying on temporally synchronized cameras. We break free of these restrictions and handle multi-projector systems for dynamic, deformable, fabric-like objects using temporally unsynchronized devices.
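The per-frame warping step described above rests on standard pinhole projection: each tracked 3D surface point is mapped through a calibrated projector's intrinsics and extrinsics to find the projector pixel that must carry that point's content. The following minimal sketch illustrates this mapping only; the function, names, and parameters are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: pinhole projection of a tracked 3D surface point into a
# projector's image plane. K, R, t would come from the surface-based
# calibration step; all names here are illustrative assumptions.

def project_point(X, K, R, t):
    """Project 3D point X (world frame) into a pinhole projector's image.

    K: 3x3 intrinsics, R: 3x3 rotation, t: length-3 translation.
    Returns (u, v) pixel coordinates, or None if the point lies behind
    the projector (no valid projection).
    """
    # Transform into the projector's coordinate frame: Xc = R @ X + t
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    if Xc[2] <= 0:  # point behind the projection center
        return None
    fx, fy = K[0][0], K[1][1]
    cx, cy = K[0][2], K[1][2]
    # Perspective divide, then apply focal lengths and principal point
    u = fx * Xc[0] / Xc[2] + cx
    v = fy * Xc[1] / Xc[2] + cy
    return (u, v)
```

In a full pipeline this mapping would be inverted per projector pixel to build a warp map as the tracked surface deforms each frame.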
We devise novel methods that use ray- and plane-based constraints imposed by the pinhole camera model to address these issues, and design new blending methods based on 3D distances that are suitable for deformable surfaces. Finally, unlike all prior work on rigid dynamic surfaces, which relies on a single RGB-D camera, we devise a tracking method that involves all RGB-D cameras, since no single camera sees the surface completely. Together, these methods enable a seamless display at scale in the presence of continuous movements and deformations. This work has tremendous applications in mobile and expeditionary systems where environmental factors (e.g., wind, vibrations, suction) cannot be avoided. One can create large displays on tent walls in remote, austere military or emergency operations in minutes to support large-scale command-and-control, mission-rehearsal, or training operations. It can also be used to create displays on mobile and inflatable objects for trade shows, events, and touring edutainment applications.
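A 3D-distance-based blend in an overlap region can be sketched as follows: where several projectors illuminate the same surface point, each projector's intensity weight grows with the point's 3D distance from the edge of that projector's coverage, so contributions cross-fade smoothly even as the surface deforms. The distance values would come from the tracked surface; this normalization is an illustrative assumption, not the paper's exact formulation.

```python
# Hedged sketch of 3D-distance-based blending weights for a single surface
# point in a multi-projector overlap. edge_dists[i] is the 3D distance from
# the point to the boundary of projector i's covered region (0.0 if projector
# i does not cover the point). Illustrative assumption, not the paper's method.

def blend_weights(edge_dists, eps=1e-9):
    """Return per-projector intensity weights that sum to 1 over the
    projectors covering this point; points deep inside one projector's
    coverage (large edge distance) receive most of that projector's weight."""
    total = sum(edge_dists)
    if total < eps:  # point not covered by any projector
        return [0.0 for _ in edge_dists]
    return [d / total for d in edge_dists]
```

Evaluating such weights over the deforming mesh each frame would keep the cross-fade aligned with the moving overlap boundaries.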