Abstract

A mathematical model for a typical CCD camera system used in machine vision applications is presented. This model is useful in the research and development of machine vision systems and in the computer simulation of camera systems. The model was developed with the intention of using it to investigate algorithms for recovering depth from image blur. However, the model is general and can be used to address other problems in machine vision. The model is based on a precise definition of the input to the camera system. This definition decouples the photometric properties of a scene from its geometric properties in the input to the camera system. An ordered sequence of about 20 operations is defined which transforms the camera system's input to its output, i.e., digital image data. Each operation in the sequence usually defines the effect of one component of the camera system on the input. This model underscores the complexity of the actual imaging process, which is routinely underestimated and oversimplified in machine vision research.
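The core idea, a camera modeled as an ordered sequence of operations transforming scene input into digital image data, can be sketched in code. The operations below (Gaussian defocus blur, photosite sampling, A/D quantization) and their parameters are illustrative assumptions, not the paper's actual 20-step model:

```python
import numpy as np

def optical_blur(img, sigma=1.0):
    """Approximate defocus blur with a Gaussian point-spread function (assumed form)."""
    size = int(6 * sigma) | 1  # odd kernel width
    ax = np.arange(size) - size // 2
    kernel = np.exp(-ax**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    # separable convolution: rows, then columns
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, blurred)

def sensor_sample(img, factor=2):
    """Spatial sampling by the CCD photosite grid (simple decimation)."""
    return img[::factor, ::factor]

def quantize(img, bits=8):
    """A/D conversion to digital image data."""
    levels = 2**bits - 1
    return np.clip(np.round(img * levels), 0, levels).astype(np.uint8)

def camera(scene):
    """Apply the operations in order: optics -> sensor -> digitization."""
    for op in (optical_blur, sensor_sample, quantize):
        scene = op(scene)
    return scene

# A single bright point in an otherwise dark scene
scene = np.zeros((16, 16))
scene[8, 8] = 1.0
digital = camera(scene)
print(digital.shape, digital.dtype)  # (8, 8) uint8
```

Each stage consumes the previous stage's output, so swapping in a different blur kernel (e.g., a pillbox PSF for depth-from-defocus experiments) only changes one element of the pipeline.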
