Abstract

Computational photography has emerged as a multidisciplinary field at the intersection of optics, computer vision, and computer graphics, with the objective of acquiring richer representations of a scene than conventional cameras can capture. The basic idea is to code the information before it reaches the sensor, so that a subsequent decoding step yields the final image (or video, light field, focal stack, etc.). We describe here two examples of computational photography. One deals with coded apertures for the problem of defocus deblurring and is a classical example of this coding-decoding scheme. The other is an ultrafast imaging system, the first able to capture light propagation in macroscopic, high-resolution scenes at 0.5 trillion frames per second.
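The coding-decoding scheme described above can be illustrated with a minimal simulation. The sketch below is purely illustrative and not the paper's method: capture is modeled as convolution of the scene with a coded-aperture point-spread function (PSF), and decoding uses standard Wiener deconvolution; the binary code, SNR value, and all function names are assumptions.

```python
import numpy as np

def capture(scene, psf):
    """Simulate coded capture: the scene is blurred by the aperture's PSF
    (circular convolution via the FFT)."""
    return np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf, scene.shape)))

def wiener_decode(image, psf, snr=100.0):
    """Decode by Wiener deconvolution: invert the coded blur in the
    frequency domain, regularized by an assumed signal-to-noise ratio."""
    H = np.fft.fft2(psf, image.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * W))

# Hypothetical 8x8 binary coded aperture (a random broadband pattern,
# chosen only for illustration), normalized to unit light throughput.
rng = np.random.default_rng(0)
psf = rng.integers(0, 2, (8, 8)).astype(float)
psf /= psf.sum()

scene = rng.random((64, 64))          # toy scene
coded = capture(scene, psf)           # what the sensor records
recovered = wiener_decode(coded, psf) # decoded estimate of the scene

print("blur error:   ", np.mean(np.abs(coded - scene)))
print("decode error: ", np.mean(np.abs(recovered - scene)))
```

The decoded estimate is substantially closer to the scene than the raw coded measurement, which is the essence of the scheme: a well-chosen code keeps the blur invertible, so the decoding step can undo it.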
