Abstract

Modeling dynamic scenes is a challenging problem faced by applications such as digital content generation and motion analysis. Fast single-frame methods obtain only sparse depth samples, while multiple-frame methods often rely on the rigidity of the object to correspond a small number of consecutive shots for decoding the pattern by feature tracking. We present a novel structured-light acquisition method that obtains dense depth and color samples for moving and deformable surfaces undergoing repetitive motion. Our key observation is that, for repetitive motion, different views of the same motion state under different structured-light patterns can be corresponded by image matching. These images densely encode an effectively static scene with time-multiplexed patterns, which we can use to reconstruct the time-varying scene. At the same time, color samples are recovered by matching images illuminated with white light to those illuminated with structured-light patterns. We demonstrate our approach on several real-world scenes.
