Abstract

In recent years, depth cameras (such as Microsoft Kinect and ToF cameras) have gained much popularity in the computer graphics, visual computing, and virtual reality communities due to their low price and easy availability. While depth cameras (e.g. Microsoft Kinect) provide RGB images along with real-time depth information at a high frame rate, the depth images often suffer from several artifacts caused by inaccurate depth measurement. These artifacts severely degrade the visual quality of the depth frames. Most of them originate from two main sources: missing/invalid depth values and fluctuating valid depth values in the generated content. In this paper, we propose a new depth image enhancement method for depth-camera content that addresses these two main sources of artifacts. We introduce a robust 1D Least Median of Squares (1D LMedS) approach to estimate the depth values of pixels with missing/invalid depth values. We use a sequence of frames to detect invalid depth values (treated as outliers) and replace them with stable, more plausible depth values. In doing so, our approach also reduces the instability of valid depth values in captured scenes, which is otherwise perceived as flickering. We evaluate the proposed 1D LMedS approach on self-recorded and reference datasets against reference methods. Experimental results show improvements for both static and moving parts of a scene.
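For illustration, below is a minimal per-pixel sketch of how a 1D LMedS estimate over a temporal window of depth samples could be used to fill invalid readings. The function names, the zero-as-invalid convention, and the window handling are assumptions made for the example; this is not the paper's actual implementation, which also handles fluctuating valid values and moving scene parts.

```python
import numpy as np

def lmeds_1d(samples):
    """1D Least Median of Squares location estimate: the midpoint of the
    shortest half of the sorted samples (robust to roughly 50% outliers)."""
    x = np.sort(np.asarray(samples, dtype=float))
    n = x.size
    if n == 0:
        return np.nan
    h = n // 2 + 1                       # size of the "half" subsample
    if h > n:
        return x[0]
    widths = x[h - 1:] - x[:n - h + 1]   # width of every contiguous half
    j = int(np.argmin(widths))           # index of the shortest half
    return 0.5 * (x[j] + x[j + h - 1])

def stabilize_pixel(depth_sequence, invalid=0):
    """Replace invalid (here: zero) depth readings of one pixel over a
    temporal window with the robust 1D LMedS estimate of the valid ones."""
    seq = np.asarray(depth_sequence, dtype=float)
    valid = seq[seq != invalid]
    if valid.size == 0:
        return seq                       # nothing to recover from
    estimate = lmeds_1d(valid)
    return np.where(seq == invalid, estimate, seq)

# Example: one pixel's depth (in mm) over 9 frames with dropouts (0) and
# one flickering outlier (1490); the dropouts are filled with ~813.
print(stabilize_pixel([812, 0, 815, 813, 0, 1490, 814, 812, 0]))
```

Because the LMedS estimate minimizes the median rather than the mean of squared residuals, a single flickering outlier in the window does not bias the filled-in value, which is the property the abstract relies on for temporal stabilization.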
