Abstract

This paper describes a method of image generation based on a transformation that integrates sequences of multiple differently focused images. First, we assume that the scene is defocused according to a geometrical blurring model. We then combine the spatial frequencies of the scene and of the sequence using a 3-D convolution filter that expresses how the scene is defocused across the sequence. The filter can be represented as a linear combination of ray-sets passing through each point of the lens. Based on this relation, in the 3-D frequency domain we extract each ray-set from the filter as a set of frequency components and merge these components to reconstruct various filters that can generate images with different viewpoints and blurs.
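The core operation described above, filtering a sequence of differently focused images (a focal stack) in the 3-D frequency domain, can be sketched as follows. This is a minimal illustration under assumed array shapes with a placeholder all-pass filter, not the paper's actual ray-set filter; the function name and shapes are hypothetical.

```python
import numpy as np

def filter_focal_stack(stack, freq_filter):
    """Apply a 3-D frequency-domain filter to a focal stack.

    stack:       (D, H, W) real array -- D differently focused images.
    freq_filter: (D, H, W) complex array -- a 3-D transfer function
                 (hypothetical; the paper builds it from ray-sets).
    Returns the filtered stack in the spatial domain.
    """
    spectrum = np.fft.fftn(stack)        # 3-D spatial-frequency spectrum
    filtered = spectrum * freq_filter    # pointwise product = 3-D convolution
    return np.real(np.fft.ifftn(filtered))

# Toy usage: an all-pass (identity) filter leaves the stack unchanged.
stack = np.random.rand(4, 8, 8)
identity = np.ones((4, 8, 8), dtype=complex)
out = filter_focal_stack(stack, identity)
assert np.allclose(out, stack)
```

In the paper's setting, `freq_filter` would instead be assembled by extracting and recombining ray-set frequency components, so that the same pointwise multiplication synthesizes images with a chosen viewpoint and blur.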
