Abstract

A technique for synthesising novel views of an object or scene from a linear combination of basis images, originally proposed by Ullman and Basri, is briefly reviewed, extended and evaluated in a series of experiments on simple test objects. A symmetric but overcomplete set of linear equations relating a small number of control points in the novel view to corresponding points in the basis images is used to calculate the geometry of the object as seen in the novel view. The image intensity is then calculated from a rendering model based on the distance of the novel view from the basis views. Comparison of synthesised and actual images of the objects shows that the reconstructed image geometry and intensity are both accurate unless perspective effects are large. The use of an overcomplete set of linear equations to calculate the reconstructed image geometry does not lead to stability problems.
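
The geometric step described above can be illustrated with a minimal sketch of the linear-combination-of-views idea. The parameterisation below (all four coordinates from two basis views plus a constant term, solved by least squares over the control points) is an assumption chosen for illustration and is not necessarily the exact formulation or solution method used in the paper; all function and variable names are hypothetical.

```python
# Minimal sketch of the linear-combination-of-views idea (after Ullman & Basri).
# Assumptions (not taken from the paper): two basis views, an orthographic-style
# model, an overcomplete parameterisation using [x1, y1, x2, y2, 1], and a
# least-squares solve over the control points.
import numpy as np

def fit_view_coefficients(basis1, basis2, novel_controls, control_idx):
    """Estimate linear-combination coefficients from a few control points.

    basis1, basis2 : (N, 2) arrays of (x, y) point positions in the basis views
    novel_controls : (K, 2) positions of the control points in the novel view
    control_idx    : indices of the K control points among the N model points
    Returns a (5, 2) matrix mapping [x1, y1, x2, y2, 1] -> (x', y').
    """
    A = np.column_stack([
        basis1[control_idx],        # x1, y1 from the first basis view
        basis2[control_idx],        # x2, y2 from the second basis view
        np.ones(len(control_idx)),  # constant term
    ])
    # The system is overcomplete; least squares stands in for whatever
    # solution method the paper actually employs.
    coeffs, *_ = np.linalg.lstsq(A, novel_controls, rcond=None)
    return coeffs

def synthesise_geometry(basis1, basis2, coeffs):
    """Predict every model point's position in the novel view."""
    A_all = np.column_stack([basis1, basis2, np.ones(len(basis1))])
    return A_all @ coeffs
```

Once the novel-view positions are predicted for all points, the intensity step would then be handled separately, for example by weighting the basis-image intensities according to how far the novel view lies from each basis view, as the abstract indicates.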
