Abstract

Although 3D object reconstruction from a single perspective is a rapidly developing field, most research focuses on reconstructing a single static object from a synthetically generated dataset. This leaves a major knowledge gap when it comes to reconstructing morphing 3D objects from imperfect real-world frames. To address this problem, we introduce a three-stage deep auto-refining adversarial neural network architecture that can denoise and refine real-world depth sensor data captured with Intel RealSense devices for full human body posture reconstruction. The proposed solution achieves results on par with other state-of-the-art approaches in both Earth Mover's and Chamfer distances, 0.059 and 0.079 respectively, while having the benefit of reconstructing from mask-less real-world depth frames. Visual inspection of the reconstructed point clouds suggests strong adaptation to the majority of real-world depth sensor noise deformities for both LiDAR and structured-light depth sensors.
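
For readers unfamiliar with the two evaluation metrics reported above, the sketch below shows one common way to compute Chamfer and Earth Mover's distances between point clouds, using NumPy and SciPy. This is an illustrative definition, not the paper's evaluation code: normalization conventions (sum versus mean, squared versus unsquared distances, approximate versus exact matching for EMD) vary across the literature, so the function names and exact formulas here are assumptions.

import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def chamfer_distance(a, b):
    # Symmetric Chamfer distance between point clouds a (N, 3) and b (M, 3):
    # average nearest-neighbor distance in each direction.
    d = cdist(a, b)  # pairwise Euclidean distance matrix, shape (N, M)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def earth_movers_distance(a, b):
    # Exact EMD via optimal one-to-one matching (Hungarian algorithm);
    # assumes both clouds have the same number of points.
    d = cdist(a, b)
    rows, cols = linear_sum_assignment(d)
    return d[rows, cols].mean()

# Example usage on two random clouds of 1024 points each:
a = np.random.rand(1024, 3)
b = np.random.rand(1024, 3)
print(chamfer_distance(a, b), earth_movers_distance(a, b))

Note that the exact Hungarian matching used here is O(n^3) and is typically replaced by an approximate solver for large point clouds.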
