Abstract

In this paper, we propose a wide-angle view synthesis system based on multiple captured images and depth maps. We use a two-by-two array of Kinect v2 cameras to support virtual camera tilt and zoom-in cases. We integrate two essential components developed by our lab colleagues to form a complete and practical system. Two additional elements are designed and implemented in this study. The first is an improved multi-view blending algorithm that clearly improves synthesized image quality even when the warping process induces artifacts. Because four RGB-D cameras capture the reference views, we have sufficient information to decide which pixels in the warped views are erroneous and to replace them with the correct virtual color pixels chosen from the other warped views. The second element addresses the synchronization problem among multiple Kinects when capturing images and videos. For simplicity, our first attempt implements a clock calibration scheme based on PC clock synchronization software. We examine the quality of synthesized views for various camera tilt and zoom-in/out cases. Experimental results show that the proposed multi-view blending algorithm achieves good subjective synthesized image quality on real-world images captured by four Kinect v2 sensors.
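The abstract does not specify the exact error-detection rule used in the multi-view blending step, so the following is only a hedged sketch of one plausible scheme consistent with the description: for each output pixel, the candidate colors from the four warped views are compared against a robust consensus (here, the per-channel median), candidates far from the consensus are flagged as erroneous, and the remaining candidates are averaged. The function name, threshold, and median-based criterion are illustrative assumptions, not the authors' published algorithm.

```python
# Sketch of multi-view blending with outlier rejection across N warped views.
# Assumption: an erroneous warped pixel is one whose color deviates strongly
# from the per-channel median of the valid candidates at that location.
import numpy as np

def blend_warped_views(warped, valid, outlier_thresh=30.0):
    """Blend N warped reference views into one synthesized image.

    warped : (N, H, W, 3) float array of warped colors
    valid  : (N, H, W) bool array, False where warping left a hole
    """
    warped = warped.astype(np.float64)
    # Hide invalid candidates so they do not influence the consensus.
    masked = np.where(valid[..., None], warped, np.nan)
    consensus = np.nanmedian(masked, axis=0)              # (H, W, 3)
    # Euclidean color distance of each candidate from the consensus.
    dist = np.linalg.norm(masked - consensus, axis=-1)    # (N, H, W)
    good = valid & (dist <= outlier_thresh)
    # Average the surviving candidates; fall back to the consensus color
    # at pixels where every candidate was rejected or invalid.
    weights = good[..., None].astype(np.float64)
    summed = (warped * weights).sum(axis=0)
    count = weights.sum(axis=0)
    out = np.where(count > 0, summed / np.maximum(count, 1), consensus)
    return np.nan_to_num(out).astype(np.uint8)
```

With four reference views, a single badly warped candidate at a pixel is outvoted by the other three, which is why a median-style consensus is a natural fit for this camera layout.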
