Abstract

Film-based cameras were replaced by digital cameras in a very short period. Saving the cost of film for each shot is not the only advantage of this transition. The more important advantage of digitization is that a completed image no longer has to be formed optically on the detector; the acquired data can instead be processed by a computer. With an additional optical element such as a lens array, the acquired data can be regarded as packets of information that contain both an image and its depth, or as a bundle of incident rays equivalent to the actual 3-D scene. These data represent the 3-D situation, and by computing on them, various optical effects, e.g. refocusing and changing the depth of field, can be realized or simulated virtually. My aim is to clarify how a 3-D scene is transformed into and expressed as information on a 2-D detector, and how this 2-D information is retrieved back into a 3-D scene through a lens array and other optical devices. Light-field optics based on a lens array is one such reversible transformation between an actual 3-D scene and information encoded on a 2-D plane. Recently a new alternative to the lens array was introduced: a circular zone plate. In a 3-D display consisting of a flat display and a lens array, the encoded data are shown on the flat display and are retrieved as a real 3-D image by the lens array. I will discuss not only the advantages of this approach but also its limitations for 3-D image reconstruction.
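
As a concrete illustration of the computational refocusing mentioned above, the sketch below applies the standard shift-and-add technique to a 4-D light field. The array layout (sub-aperture images indexed by aperture position), the function name `refocus`, and the `slope` parameter are assumptions chosen for illustration, not the specific method of this paper.

```python
import numpy as np
from scipy.ndimage import shift

def refocus(light_field, slope):
    """Synthetic refocusing of a 4-D light field by shift-and-add.

    light_field : array of shape (U, V, H, W) holding sub-aperture
                  images indexed by aperture position (u, v).
                  (Hypothetical layout; raw lens-array data must first
                  be resampled into this form.)
    slope       : pixel shift per unit aperture offset; changing it
                  moves the synthetic focal plane through the scene.
    """
    U, V, H, W = light_field.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0   # aperture centre
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its aperture offset,
            # then average all views into one refocused image.
            dy, dx = slope * (u - uc), slope * (v - vc)
            out += shift(light_field[u, v], (dy, dx),
                         order=1, mode='nearest')
    return out / (U * V)
```

With `slope = 0` the views are simply averaged, reproducing the focal plane of the original capture; other values of `slope` refocus to different depths, and averaging only a subset of the (u, v) views narrows the synthetic aperture and deepens the depth of field.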
