Abstract
The paper presents a new depth estimation method designed for free-viewpoint television (FTV) and virtual navigation (VN). In this method, multiple arbitrarily positioned input views are used simultaneously to produce depth maps with high inter-view and temporal consistency. The estimation is performed on segments, whose size controls the trade-off between depth-map quality and processing time. Additionally, an original technique is proposed to improve the temporal consistency of depth maps. This technique uses temporal prediction of depth, so depth is estimated for P-type depth frames; for such frames, temporal consistency is high while estimation complexity is relatively low. As in video coding, I-type depth frames, estimated without temporal depth prediction, are used to ensure robustness. Moreover, we propose a novel parallelization technique that significantly reduces the estimation time. The method is implemented in C++ software provided together with this paper, so other researchers may use it as a new reference for their future work. In the experiments, MPEG methodology was used whenever possible. The results demonstrate advantages over the Depth Estimation Reference Software (DERS) developed by MPEG: the fidelity of the depth maps, measured by the quality of synthesized views, is higher by 2.6 dB on average. This quality improvement is obtained despite a significant reduction of the estimation time, on average 4.5 times; applying the proposed temporal consistency enhancement increases this reduction to 29 times, and the proposed parallelization reduces the estimation time by up to 130 times (using 6 threads). As there is no commonly accepted measure of the consistency of depth maps, the compression efficiency of depth is proposed as such a measure.
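To illustrate the I-/P-type depth-frame organization described above, the following C++ sketch schedules depth frames in a GOP-like pattern and passes the previous depth map to P-type frames as a temporal prediction. All names, constants, and the trivial estimator body (estimateDepthFrame, kGopLength, kSegmentCount) are illustrative assumptions, not the interface of the software provided with the paper.

```cpp
#include <cstddef>
#include <cstdio>
#include <optional>
#include <vector>

// Sketch only: a depth map is modelled as a flat array of per-segment depth
// values; the actual method estimates depth for image segments of a chosen size.
using DepthMap = std::vector<float>;

constexpr std::size_t kGopLength    = 8;    // assumed spacing of I-type depth frames
constexpr std::size_t kSegmentCount = 1000; // assumed number of segments per frame

// Hypothetical per-frame estimator. For P-type frames the previous depth map is
// supplied as a temporal prediction, so the estimation can start from it
// (high temporal consistency, low complexity); for I-type frames no prediction
// is used, which restores robustness, e.g. at scene changes.
DepthMap estimateDepthFrame(std::size_t /*frameIdx*/,
                            const std::optional<DepthMap>& prediction)
{
    if (prediction)
        return *prediction;               // placeholder: refine the predicted depth
    return DepthMap(kSegmentCount, 0.0f); // placeholder: estimate from scratch
}

// I-/P-type depth-frame scheduling, analogous to a video-coding GOP structure.
std::vector<DepthMap> estimateSequence(std::size_t frameCount)
{
    std::vector<DepthMap> depth;
    depth.reserve(frameCount);
    for (std::size_t t = 0; t < frameCount; ++t) {
        const bool isIFrame = (t % kGopLength == 0);
        std::optional<DepthMap> prediction;
        if (!isIFrame && t > 0)
            prediction = depth.back();    // temporal depth prediction (P-type frame)
        depth.push_back(estimateDepthFrame(t, prediction));
    }
    return depth;
}

int main()
{
    const auto depth = estimateSequence(25);
    std::printf("estimated %zu depth frames\n", depth.size());
    return 0;
}
```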
Highlights
In free-viewpoint television (FTV) and virtual navigation (VN) [29], [41], on which we focus in this paper, a user can arbitrarily change her/his viewpoint at any time and is not limited to watching views acquired by cameras located around the scene
In the conducted experiments, not only do we compare our method with the state-of-the-art graph-based depth estimation method, the Depth Estimation Reference Software (DERS) [7] (Section VI-A2), but we also determine the performance of the presented method for different numbers of segments (Section VI-A3) and for different numbers of views used in the estimation (Section VI-A4)
We focus on the quality of free navigation for a user of the FTV system; in order to measure the increase of the temporal consistency of depth maps, synthesized virtual views are compressed with an HEVC encoder, as illustrated by the sketch after this list
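The following sketch illustrates how compression efficiency can serve as a proxy for temporal consistency, as proposed above: two synthesized-view sequences are encoded with identical HEVC settings and the resulting bitstream sizes are compared, the idea being that views synthesized with more temporally consistent depth compress into fewer bits. It assumes the x265 command-line encoder is installed; the file names, resolution, frame rate, and QP are placeholders, and this is not the exact evaluation pipeline used in the paper.

```cpp
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <filesystem>
#include <string>

// Encode a raw YUV 4:2:0 sequence with the x265 CLI (assumed to be installed)
// and return the size of the produced HEVC bitstream in bytes.
std::uintmax_t encodedSize(const std::string& yuvPath, const std::string& outPath,
                           int width, int height, int fps, int qp)
{
    const std::string cmd =
        "x265 --input " + yuvPath +
        " --input-res " + std::to_string(width) + "x" + std::to_string(height) +
        " --fps " + std::to_string(fps) +
        " --qp " + std::to_string(qp) +
        " --output " + outPath;
    if (std::system(cmd.c_str()) != 0)
        return 0;  // encoder failed; no bitstream to measure
    return std::filesystem::file_size(outPath);
}

int main()
{
    // Placeholder inputs: views synthesized from two different sets of depth maps.
    const auto bitsA = encodedSize("synth_proposed.yuv", "proposed.hevc", 1920, 1080, 25, 32);
    const auto bitsB = encodedSize("synth_reference.yuv", "reference.hevc", 1920, 1080, 25, 32);

    // A smaller bitstream at identical settings indicates higher temporal
    // consistency of the underlying depth maps.
    std::printf("proposed: %ju bytes, reference: %ju bytes\n", bitsA, bitsB);
    return 0;
}
```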
Summary
In free-viewpoint television (FTV) and virtual navigation (VN) [29], [41], on which we focus in this paper, a user can arbitrarily change her/his viewpoint at any time and is not limited to watching views acquired by cameras located around the scene. The most commonly used spatial representation of 3D scenes is the depth map [39], widely used in the context of free-viewpoint television and virtual navigation. In FTV and VN systems, the fidelity and quality of depth maps strongly influence the quality of the synthesized video and thus the quality of experience when navigating through a 3D scene. Practical limitations of depth sensors, such as depth cameras and lidars, restrict their possible applications in FTV and VN systems.