Abstract

This paper proposes a novel strategy for depth video denoising in RGBD camera systems. Depth map sequences obtained by state-of-the-art Time-of-Flight sensors suffer from high temporal noise; consequently, any high-level RGB video rendering based on the accompanying depth maps' 3D geometry, such as an augmented reality application, exhibits severe temporal flickering artifacts. The authors address this limitation by decoupling depth map upscaling from the temporal denoising step, so that denoising operates on raw pixels whose pixel-wise noise distributions are uncorrelated. Their denoising methodology uses joint sparse 3D transform-domain collaborative filtering, in which RGB texture information is exploited to obtain a more stable and accurate, highly sparse 3D depth block representation for the subsequent shrinkage operation. The authors demonstrate the effectiveness of their method on real RGBD camera data and on a publicly available synthetic data set. The evaluation shows that the method outperforms state-of-the-art approaches and delivers flicker-free depth video streams for future applications.
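The core operation the abstract refers to, collaborative filtering with shrinkage in a 3D transform domain, can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes similar depth blocks have already been grouped into a stack (the paper additionally guides grouping with RGB texture), and it uses a 3D orthonormal DCT with hard thresholding as the shrinkage step.

```python
import numpy as np
from scipy.fft import dctn, idctn

def collaborative_shrinkage(block_stack, threshold):
    """Jointly denoise a group of similar 2D depth blocks (sketch).

    block_stack: (K, N, N) array of K grouped, mutually similar patches.
    threshold: hard-threshold level, typically a multiple of the noise sigma.
    """
    # 3D orthonormal DCT over the whole group: sparsifies structure
    # within each block and redundancy across the stacked blocks.
    coeffs = dctn(block_stack, norm='ortho')
    # Shrinkage: hard-threshold small coefficients, which mostly carry noise.
    coeffs[np.abs(coeffs) < threshold] = 0.0
    # Inverse 3D transform: all blocks in the group are denoised jointly.
    return idctn(coeffs, norm='ortho')

# Toy usage: a stack of flat depth patches corrupted by Gaussian noise.
rng = np.random.default_rng(0)
clean = np.full((8, 4, 4), 5.0)
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = collaborative_shrinkage(noisy, threshold=0.3)
```

Because the orthonormal transform preserves the noise standard deviation per coefficient, thresholding at roughly three times the noise sigma removes most noise coefficients while the strong low-frequency depth structure survives, which is what yields the temporal stability the abstract emphasizes.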
