Abstract

In this paper, we investigate the impact of deep learning-based technologies, such as super-resolution, on Holoscopic 3D (H3D) images. Holoscopic 3D imaging is a technology that aims to provide a cost-effective alternative for viewing and consuming 3D content without requiring special headgear or a fixed posture. The technique uses a special lens array fitted to standard DSLR or mirrorless cameras to generate or capture 3D content. The output is a Holoscopic 3D image that, after a post-processing procedure, can be displayed on light-field or multiview displays. The main advantage of this technique is its cost-effectiveness in viewing and interacting with 3D content. However, its drawbacks include the low spatial density of commercial cameras' CMOS sensors and lens-induced imperfections. The latter can be corrected in software using distortion correction techniques, but the former remains challenging for techniques that must produce natural-looking output. Mitigating these issues in hardware would raise costs, and the technique would lose its main advantage. Our approach consists of designing a framework that leverages software tools to upscale the output of H3D cameras, thereby addressing the low spatial density problem of H3D images. We also investigate the impact of deep learning-based video motion interpolation on the output quality of the H3D imaging framework.
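
As a minimal illustration of the kind of upscaling step the abstract refers to (not the authors' actual framework), the sketch below applies a pretrained single-image super-resolution model to a captured H3D frame using OpenCV's dnn_superres module. The choice of the EDSR model, the x4 scale factor, and the file names are assumptions made purely for demonstration.

```python
# Hypothetical sketch: super-resolving a low-spatial-density H3D capture with a
# pretrained model via OpenCV's dnn_superres module (opencv-contrib-python).
# The EDSR model, x4 scale, and file paths are illustrative assumptions only,
# not the specific framework described in the paper.
import cv2

def upscale_h3d_frame(input_path: str, model_path: str, scale: int = 4):
    sr = cv2.dnn_superres.DnnSuperResImpl_create()
    sr.readModel(model_path)        # e.g. a pretrained "EDSR_x4.pb" weight file
    sr.setModel("edsr", scale)      # model name must match the loaded weights
    frame = cv2.imread(input_path)  # low-resolution H3D image from the camera
    if frame is None:
        raise FileNotFoundError(input_path)
    return sr.upsample(frame)       # super-resolved output image

if __name__ == "__main__":
    result = upscale_h3d_frame("h3d_capture.png", "EDSR_x4.pb")
    cv2.imwrite("h3d_capture_x4.png", result)
```

In practice, any learned super-resolution backbone could be substituted at this stage; the point of the sketch is only the interface of upscaling an H3D frame in software rather than in hardware.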
