Abstract

An efficient method for converting 2D video sequences to 3D is presented. The method uses the motion information between consecutive frames to approximate the depth map of the scene. To estimate depth, the horizontal motion captured by a single camera is revised and then treated as the disparity between the left and right frames that two cameras would capture in a stereoscopic set-up. To enhance visual depth perception, a non-linear scaling model is then applied to the modified motion vectors. The low complexity of our approach and its compatibility with future 3D systems allow real-time implementation at the receiver end with no additional burden on the network. Performance evaluations show that our approach outperforms the existing H.264-based depth map estimation technique by 1.84 dB PSNR, providing a more realistic depth representation of the scene. Moreover, subjective comparisons, obtained by viewers watching the generated stereo video sequences on a 3D display system, confirm the better performance of our method.
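The core idea, horizontal motion magnitudes interpreted as stereo disparity and passed through a non-linear scaling model, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the power-law scaling with exponent `gamma`, the `scale` range, and the function name `depth_from_motion` are all assumptions chosen for clarity, and the motion field is taken as a given array rather than extracted from an H.264 bitstream.

```python
import numpy as np

def depth_from_motion(mv_x, scale=255.0, gamma=0.5):
    """Approximate a depth map from horizontal motion vectors.

    mv_x  : 2D array of per-block horizontal motion magnitudes,
            treated as the disparity between virtual left/right views
    gamma : exponent of a hypothetical non-linear scaling model;
            values below 1 expand small disparities, enhancing the
            perceived depth of slowly moving regions
    """
    disparity = np.abs(mv_x).astype(np.float64)
    peak = disparity.max()
    if peak == 0.0:
        return np.zeros_like(disparity)   # static scene: flat depth map
    normalized = disparity / peak         # map disparities to [0, 1]
    return scale * normalized ** gamma    # non-linear depth scaling

# Example: four blocks with increasing horizontal motion
mv = np.array([[0.0, 1.0],
               [4.0, 16.0]])
depth = depth_from_motion(mv)
```

With `gamma = 0.5`, the block moving at a quarter of the peak speed is assigned half of the maximum depth value, illustrating how the non-linear model boosts depth contrast among slow-moving regions.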
