Abstract

Recent advancements in 3D data capture have enabled the real-time acquisition of high-resolution 3D range data, even on mobile devices. However, this type of high bit-depth data remains difficult to transmit efficiently over a standard broadband connection. The most successful techniques for tackling this data problem thus far have been image-based depth encoding schemes that leverage modern image and video codecs. To our knowledge, no published work has directly optimized the end-to-end losses of a depth encoding scheme sandwiched around a lossy image compression codec. We present N-DEPTH, a compression-resilient neural depth encoding method that leverages deep learning to efficiently encode depth maps into 24-bit RGB representations that minimize end-to-end depth reconstruction errors when compressed with JPEG. N-DEPTH's learned robustness to lossy compression extends to video codecs as well. Compared to an existing state-of-the-art encoding method, N-DEPTH achieves smaller file sizes and lower errors across a wide range of compression qualities, in both image (JPEG) and video (H.264) formats. For example, reconstructions from N-DEPTH encodings stored with JPEG had dramatically lower error while still offering 29.8% smaller file sizes. When H.264 video was used to target a 10 Mbps bit rate, N-DEPTH reconstructions had 85.1% lower root mean square error (RMSE) and 15.3% lower mean absolute error (MAE). Overall, our method offers an efficient and robust solution for emerging 3D streaming and 3D telepresence applications, enabling high-quality 3D depth data storage and transmission.
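To make the core idea concrete, the following is a minimal conceptual sketch (not the authors' code) of training a depth-to-RGB encoding end-to-end around a lossy codec, in PyTorch. The module names (DepthEncoder, DepthDecoder), network sizes, and the straight-through JPEG proxy are all hypothetical illustrations; a real pipeline would substitute a differentiable model of JPEG's DCT and quantization stages.

```python
# Conceptual sketch of end-to-end training of a depth encoding scheme
# "sandwiched around" a lossy codec. All names and architectures here
# are illustrative assumptions, not the published N-DEPTH design.
import torch
import torch.nn as nn

class DepthEncoder(nn.Module):
    """Maps a 1-channel depth map to a 3-channel (24-bit RGB) encoding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, depth):
        return self.net(depth)

class DepthDecoder(nn.Module):
    """Reconstructs depth from the (possibly compressed) RGB encoding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, rgb):
        return self.net(rgb)

def jpeg_proxy(rgb, noise_std=0.02):
    """Hypothetical stand-in for a differentiable JPEG layer: 8-bit
    quantization plus noise, with a straight-through gradient so the
    encoder can be trained through the non-differentiable step."""
    quantized = torch.round(rgb * 255.0) / 255.0
    degraded = (quantized + noise_std * torch.randn_like(rgb)).clamp(0, 1)
    return rgb + (degraded - rgb).detach()  # straight-through estimator

encoder, decoder = DepthEncoder(), DepthDecoder()
params = list(encoder.parameters()) + list(decoder.parameters())
opt = torch.optim.Adam(params, lr=1e-4)

depth = torch.rand(4, 1, 128, 128)          # placeholder depth batch
rgb = encoder(depth)                        # 24-bit RGB encoding
rgb_lossy = jpeg_proxy(rgb)                 # simulate lossy compression
recon = decoder(rgb_lossy)                  # depth reconstruction
loss = nn.functional.l1_loss(recon, depth)  # end-to-end depth error
loss.backward()
opt.step()
```

The key design point this sketch illustrates is that the compression step sits inside the training loop, so the encoder learns RGB representations that survive the codec's quantization rather than being optimized in isolation and degraded by compression afterward.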
