Abstract

For centuries, cameras were designed to closely mimic the human visual system. With the rapid increase in computing power over the last few decades, researchers in the vision, graphics, and optics communities have turned their attention to imaging systems that use computation as an integral part of the imaging process. Computational cameras optically encode information that is later decoded using signal processing. In this thesis, I present three new computational imaging designs that provide functionality beyond that of conventional cameras. Each design has been rigorously analyzed, built, and tested, and each demonstrates an increase in functionality over traditional camera designs. The first two systems, Diffusion Coding and Spectral Focal Sweep, computationally extend the depth of field of an imaging system without sacrificing optical efficiency; these techniques can be used to preserve image detail when photographing scenes that span very large depth ranges. The third, Gigapixel Computational Imaging, uses a computational approach to overcome the limits on spatial resolution imposed by geometric aberrations in conventional cameras. While computational techniques can increase optical efficiency, this benefit comes at a cost: noise amplification introduced by the decoding process. To measure the real utility of a computational approach, we must therefore weigh the benefit of increased optical efficiency against the cost of amplified noise, and a complete treatment must rest on an accurate noise model. In some cases the benefit does not outweigh the cost, and a computational approach offers no advantage. The thesis concludes with a discussion of these scenarios.
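To make the encode/decode trade-off described above concrete, the sketch below (an illustration under assumed parameters, not code from the thesis) blurs a point source with a simple box point spread function, adds sensor noise, and decodes with a Wiener filter. The reported gain shows how the decoding step amplifies noise; the PSF shape, noise level, and regularization are all assumptions chosen only for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256

# A box-shaped PSF stands in for the optical encoding (assumed for illustration);
# any normalized blur kernel could be substituted here.
psf = np.zeros(n)
psf[:8] = 1.0 / 8.0

signal = np.zeros(n)
signal[n // 2] = 1.0                      # an ideal point source

# Optical encoding: convolution with the PSF, plus additive sensor noise.
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(psf)))
sigma = 0.01                              # assumed sensor noise level
captured = blurred + rng.normal(0.0, sigma, n)

# Decoding: Wiener deconvolution recovers detail but boosts noise wherever
# the PSF's frequency response is weak.
H = np.fft.fft(psf)
wiener = np.conj(H) / (np.abs(H) ** 2 + sigma ** 2)
decoded = np.real(np.fft.ifft(np.fft.fft(captured) * wiener))

# RMS noise gain of the decoding filter: >1 means the decoder amplifies noise.
gain = np.sqrt(np.mean(np.abs(wiener) ** 2))
print(f"deconvolution noise gain: {gain:.2f}x")
```

Comparing this gain across candidate optical encodings (e.g., a diffuser-coded PSF versus a conventional defocus PSF) is one way to quantify whether the light gathered by an extended-depth-of-field design outweighs the noise amplified when decoding it.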
