Abstract

A computational camera uses a combination of optics and processing to produce images that cannot be captured with traditional cameras. Over the last decade, a range of computational cameras has been proposed that use novel optical designs to encode, and computation to decode, useful visual information. What is often missing, however, is a quantitative analysis of the relation between camera design and the captured visual information; little systematic work has been done to evaluate and optimize these camera designs. While the optics of computational cameras may be quite complicated, many of them can be effectively characterized by their point spread functions (PSFs): the intensity distribution on the image sensor in response to a point light source in the scene. This thesis explores techniques to characterize, evaluate, and optimize computational cameras via PSF engineering for computer vision tasks, including image recovery, 3D reconstruction, and image refocusing.

I first explore PSF engineering techniques to recover image details from blurry images. Image blur is a common problem in photography, arising for a number of reasons, including defocus, lens aberrations, diffraction, atmospheric turbulence, and object motion. Blur is often formulated as a convolution of the latent blur-free image with a PSF, and deconvolution (deblurring) techniques must be used to recover image details from a blurry region. I propose a comprehensive framework for PSF evaluation for the purpose of image deblurring, in which the effects of image noise, the deblurring algorithm, and the structure of natural images are all taken into account (see the deconvolution sketch below). In defocus blur, the shape of the defocus PSF is largely determined by the aperture pattern of the camera lens. Using an evaluation criterion derived from this framework, I optimize the aperture pattern so that far more image detail is preserved under defocus. Through both simulations and experiments, I demonstrate the significant improvement gained by optimized aperture patterns.

While defocus blur causes a loss of image detail, it also encodes the depth of the scene in the image. A typical depth from defocus (DFD) technique computes depth from two or more images captured with a circular-aperture lens at different focus settings; circular apertures produce circular defocus PSFs. In this thesis, I show that the use of a circular aperture severely restricts the accuracy of DFD, and I propose a comprehensive framework of PSF evaluation for depth recovery. From this framework I derive a criterion for evaluating a pair of apertures with respect to the precision of depth recovery, and optimize it using a genetic algorithm followed by gradient descent search to arrive at a pair of high-resolution apertures. The two coded apertures are found to complement each other in the scene frequencies they preserve. This property makes it possible not only to recover depth with greater fidelity, but also to obtain a high-quality all-focused image from the two defocused images (see the depth-estimation sketch below). While depth recovery benefits significantly from optimized aperture patterns, its overall performance is rigidly limited by the physical size of the lens aperture.
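To make the blur model and the evaluation idea concrete, here is a minimal sketch, assuming frequency-domain Wiener deconvolution and an illustrative 1/f^2 power-spectrum prior for natural images. The kernel shapes, noise level, and prior are my assumptions for illustration, not the thesis's exact parameters, and the toy comparison deliberately ignores the light loss a coded aperture incurs.

```python
import numpy as np

def one_over_f2_prior(shape, eps=1e-3):
    """Illustrative 1/f^2 power-spectrum prior for natural images."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    return 1.0 / (fx**2 + fy**2 + eps)

def wiener_deconvolve(blurred, psf, sigma2, prior):
    """Recover a sharp image from `blurred = sharp * psf + noise`.

    `psf` is the kernel padded to image size and centered; `sigma2` is
    the noise variance; `prior` is the assumed image power spectrum."""
    K = np.fft.fft2(np.fft.ifftshift(psf))          # kernel OTF
    B = np.fft.fft2(blurred)
    W = np.conj(K) / (np.abs(K)**2 + sigma2 / prior)  # Wiener filter
    return np.real(np.fft.ifft2(W * B))

def expected_deblur_error(psf, sigma2, prior):
    """Scalar PSF quality score: the per-frequency MMSE of Wiener
    deconvolution, sigma2*A / (|K|^2*A + sigma2), summed over all
    frequencies. Lower is better; aperture patterns whose spectra
    have few near-zeros score well."""
    K = np.fft.fft2(np.fft.ifftshift(psf))
    return np.sum(sigma2 * prior / (np.abs(K)**2 * prior + sigma2))

# Toy comparison: a plain disk aperture vs. a random binary pattern.
h = w = 64
yy, xx = np.mgrid[:h, :w]
disk = ((yy - h // 2)**2 + (xx - w // 2)**2 <= 6**2).astype(float)
rng = np.random.default_rng(0)
coded = disk * rng.integers(0, 2, (h, w))  # punch holes in the disk
disk /= disk.sum()                         # normalize; a real comparison
coded /= coded.sum()                       # must also model light loss
prior = one_over_f2_prior((h, w))
for name, p in [("disk", disk), ("coded", coded)]:
    print(name, expected_deblur_error(p, 1e-4, prior))
```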
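The depth-estimation side can be sketched under the same Wiener-style assumptions. The toy below tests depth hypotheses over the whole image at once, whereas the thesis works per pixel and additionally optimizes the aperture pair itself (the genetic algorithm and gradient descent stages are not shown); `prior` is the power-spectrum array from the previous sketch.

```python
import numpy as np

def dfd_with_aperture_pair(images, psf_pairs, sigma2, prior):
    """Toy whole-image depth from defocus with a coded aperture pair.

    `images`: the two captures (I1, I2); `psf_pairs`: {depth: (p1, p2)},
    the blur kernels for each depth hypothesis, padded to image size.
    For each hypothesis the sharp image is jointly estimated from both
    observations with a two-channel Wiener filter, then re-blurred; the
    hypothesis with the smallest reconstruction residual wins, and its
    joint estimate serves as the all-focused image."""
    F1, F2 = (np.fft.fft2(im) for im in images)
    best = (None, np.inf, None)                    # (depth, error, sharp)
    for depth, (p1, p2) in psf_pairs.items():
        K1 = np.fft.fft2(np.fft.ifftshift(p1))
        K2 = np.fft.fft2(np.fft.ifftshift(p2))
        # Joint MMSE estimate of the sharp image from both captures.
        X = (np.conj(K1) * F1 + np.conj(K2) * F2) / (
            np.abs(K1)**2 + np.abs(K2)**2 + sigma2 / prior)
        # Residual between the re-blurred estimate and the observations.
        err = np.sum(np.abs(K1 * X - F1)**2 + np.abs(K2 * X - F2)**2)
        if err < best[1]:
            best = (depth, err, np.real(np.fft.ifft2(X)))
    depth, _, sharp = best
    return depth, sharp
```

A complementary pair helps here because frequencies killed by one aperture's spectral zeros survive in the other capture, so the joint denominator stays well conditioned across depths.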
To transcend the aperture-size limitation, I propose a novel depth recovery technique that uses an optical diffuser, referred to as depth from diffusion (DFDiff). I show that DFDiff is analogous to conventional DFD, with the scatter angle of the diffuser determining the system's effective aperture. High-precision depth estimation can therefore be achieved by choosing a proper diffuser, and no longer requires the large lenses that DFD demands; even a consumer camera with a small, low-end lens can perform high-precision depth estimation when coupled with an optical diffuser. In a detailed analysis of the image formation properties of a DFDiff system, I show a number of examples demonstrating greater precision in depth estimation when using DFDiff. (Abstract shortened by UMI.)
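The diffuser-as-aperture analogy can be made concrete with back-of-the-envelope arithmetic. The relation below is a hedged reading of the abstract's claim, assuming a small-angle model in which a diffuser with scatter half-angle theta, placed a distance d in front of the object, spreads each object point over a disk of diameter about 2*d*tan(theta); it is not the thesis's exact derivation, and the numbers are illustrative.

```python
import math

def effective_aperture_mm(object_to_diffuser_mm, scatter_half_angle_deg):
    """Hypothetical DFDiff effective-aperture estimate under a
    small-angle scattering model (illustrative, not the thesis's
    exact image-formation analysis)."""
    theta = math.radians(scatter_half_angle_deg)
    return 2.0 * object_to_diffuser_mm * math.tan(theta)

# A 10-degree diffuser 100 mm from the object behaves like a ~35 mm
# aperture at the diffuser plane, comparable to a large fast lens,
# even if the camera behind it uses a small consumer lens.
print(effective_aperture_mm(100.0, 10.0))   # ~35.3 mm
```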
