Abstract
High-quality view synthesis is a prerequisite for future free-viewpoint television, which will enable viewers to move freely within a dynamic real-world scene. Depth-image-based rendering algorithms will play a pivotal role in synthesizing an arbitrary number of novel views from only a subset of captured views and their corresponding depth maps. Usually, each depth map is estimated individually by stereo-matching algorithms and hence lacks inter-view consistency. This inconsistency degrades the quality of the synthesized views. This paper enhances the inter-view consistency of multiview depth imagery. First, our approach classifies the color information in the multiview color imagery by modeling color with a mixture of Dirichlet distributions, whose parameters are estimated in a Bayesian framework with variational inference. Second, using the resulting color clusters, we classify the corresponding depth values in the multiview depth imagery. Each clustered depth image is then sub-clustered further. Finally, the mean of each sub-cluster is used to enhance the depth imagery at multiple viewpoints. Experiments show that our approach improves the average quality of virtual views by up to 0.8 dB compared to views synthesized from conventionally estimated depth maps.
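To make the three-step pipeline concrete, the sketch below illustrates one possible realization; it is not the authors' implementation. scikit-learn's BayesianGaussianMixture (a variationally fitted Gaussian mixture with a Dirichlet prior over the mixture weights) stands in for the paper's mixture of Dirichlet distributions, and KMeans stands in for the unspecified depth sub-clustering. The function name enhance_depth and all cluster counts are illustrative assumptions.

    # Hypothetical sketch of the enhancement pipeline described in the abstract.
    # Stand-ins: BayesianGaussianMixture replaces the paper's Dirichlet mixture
    # fitted with variational inference; KMeans replaces the unspecified depth
    # sub-clustering. Cluster counts are illustrative, not from the paper.
    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture
    from sklearn.cluster import KMeans

    def enhance_depth(colors, depths, n_clusters=8, n_subclusters=4):
        """colors: (N, 3) pixels pooled over all views; depths: (N,) depth values."""
        # Step 1: classify the multiview color information (stand-in for the
        # Dirichlet mixture estimated in a Bayesian framework).
        color_model = BayesianGaussianMixture(
            n_components=n_clusters,
            weight_concentration_prior_type="dirichlet_distribution",
        )
        labels = color_model.fit_predict(colors)

        # Steps 2-3: within each color cluster, sub-cluster the corresponding
        # depth values and replace each depth by its sub-cluster mean.
        enhanced = depths.astype(float).copy()
        for c in np.unique(labels):
            idx = np.where(labels == c)[0]
            k = min(n_subclusters, len(idx))
            sub = KMeans(n_clusters=k, n_init=10).fit(depths[idx].reshape(-1, 1))
            # KMeans cluster centers are exactly the sub-cluster means.
            enhanced[idx] = sub.cluster_centers_[sub.labels_, 0]
        return enhanced

Applying enhance_depth to the pixels of all views jointly, rather than per view, is what promotes inter-view consistency: pixels that fall into the same color cluster across viewpoints receive depths drawn from a shared set of sub-cluster means.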