Abstract
We present a VQ-based technique for coding image data that adopts an analysis-by-synthesis approach. We define a new type of spatial interaction model for image data, called a prediction pattern, which we use along with an excitation vector to generate an approximation of an input block of pixels. A prediction pattern is a k × k array with each element representing a prediction scheme from a given set of predictors; it captures the spatial dependencies present in an image block. Given a codebook of prediction patterns and a codebook of excitation vectors, we encode an image by partitioning it into blocks and, for each block, identifying the prediction pattern within the codebook that best models the spatial dependencies present in the block. We then search the excitation codebook for a code vector that, in combination with the chosen prediction pattern, synthesizes the closest approximation to the current image block. We present algorithms for codebook design and report implementation results, which show that the proposed technique is promising.
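To make the per-block encoding step concrete, the following is a minimal Python sketch of the two-stage search described above. The predictor set, the causal raster-order synthesis, and the pattern-selection criterion (zero-excitation prediction error) are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

# Hypothetical predictor set: each predictor estimates a pixel from its
# already-reconstructed left and upper neighbours (simple causal schemes).
PREDICTORS = [
    lambda left, up: 0.0,                # no prediction (excitation only)
    lambda left, up: left,               # horizontal predictor
    lambda left, up: up,                 # vertical predictor
    lambda left, up: 0.5 * (left + up),  # average predictor
]

def synthesize_block(pattern, excitation, border_left, border_up):
    """Reconstruct a k x k block from a prediction pattern (k x k array of
    predictor indices) and an excitation vector, in raster order."""
    k = pattern.shape[0]
    block = np.zeros((k, k))
    exc = excitation.reshape(k, k)
    for i in range(k):
        for j in range(k):
            left = block[i, j - 1] if j > 0 else border_left[i]
            up = block[i - 1, j] if i > 0 else border_up[j]
            block[i, j] = PREDICTORS[pattern[i, j]](left, up) + exc[i, j]
    return block

def encode_block(target, pattern_codebook, excitation_codebook,
                 border_left, border_up):
    """Two-stage search: first pick the prediction pattern whose
    zero-excitation prediction is closest to the target (an assumed
    selection criterion), then pick the excitation vector that, together
    with that pattern, synthesizes the closest approximation (MSE)."""
    k = target.shape[0]
    zero_exc = np.zeros(k * k)

    # Stage 1: choose the prediction pattern.
    p_best, p_err = 0, np.inf
    for p_idx, pattern in enumerate(pattern_codebook):
        pred = synthesize_block(pattern, zero_exc, border_left, border_up)
        err = np.mean((target - pred) ** 2)
        if err < p_err:
            p_best, p_err = p_idx, err

    # Stage 2: choose the excitation vector for the chosen pattern.
    e_best, e_err = 0, np.inf
    for e_idx, exc in enumerate(excitation_codebook):
        approx = synthesize_block(pattern_codebook[p_best], exc,
                                  border_left, border_up)
        err = np.mean((target - approx) ** 2)
        if err < e_err:
            e_best, e_err = e_idx, err

    return p_best, e_best, e_err
```

The decoder would repeat only the synthesis step, so the transmitted indices (p_best, e_best) per block suffice for reconstruction under these assumptions.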