Abstract

Subspace appearance models are widely used in computer vision and image processing to compactly represent the appearance variations of target objects. To preserve algorithm performance, they are typically stored in high-precision formats; this results in a large storage footprint, making redistribution costly and difficult. Since pixel values in most image and vision applications are quantized to 8 bits by the acquisition devices, we show that it is possible to construct a fixed-width, effectively lossless representation of the basis vectors, in the sense that reconstructions from the original basis and from the quantized basis never deviate by more than half the quantization step size. Beyond directly applying this result to losslessly compress individual models, we also propose an algorithm that compresses appearance models by exploiting prior information about the modeled objects in the form of prior appearance subspaces. Experiments on the compression of person-specific face appearance models demonstrate the effectiveness of the proposed algorithms.
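The following is a minimal Python/NumPy sketch of the setting the abstract describes: 8-bit pixel data, a high-precision subspace basis (e.g., from PCA), and a fixed-width quantized copy of that basis whose reconstructions are compared against the originals. The helper names (quantize_basis, reconstruct), the 16-bit code width, and the naive per-vector uniform scaling are illustrative assumptions; the paper's actual construction that guarantees the half-step deviation bound is not reproduced here.

```python
import numpy as np

def quantize_basis(B, num_bits=16):
    """Uniformly quantize each basis vector to fixed-width integer codes.

    Hypothetical helper: one scale per basis vector, chosen so the
    vector's dynamic range maps onto the signed integer range. This is
    a naive stand-in, not the paper's bound-guaranteeing construction.
    """
    qmax = 2 ** (num_bits - 1) - 1
    scales = np.abs(B).max(axis=0) / qmax        # per-column scale factors
    Bq = np.round(B / scales).astype(np.int32)   # fixed-width integer codes
    return Bq, scales

def reconstruct(B, coeffs, mean):
    """Reconstruct an image from subspace coefficients, clamped to 8 bits."""
    x = mean + B @ coeffs
    return np.clip(np.round(x), 0, 255)

# Toy setup: a random orthonormal basis standing in for a learned
# appearance subspace over 8-bit pixel data (1024 pixels, 20 modes).
rng = np.random.default_rng(0)
B, _ = np.linalg.qr(rng.standard_normal((1024, 20)))
mean = rng.integers(0, 256, size=1024).astype(np.float64)
coeffs = rng.standard_normal(20) * 50.0

Bq, scales = quantize_basis(B, num_bits=16)
B_dequant = Bq * scales                          # dequantized basis

orig = reconstruct(B, coeffs, mean)
quant = reconstruct(B_dequant, coeffs, mean)
print("max pixel deviation:", np.abs(orig - quant).max())
```

Under the paper's claimed representation, the reported deviation would never exceed half the 8-bit quantization step, so the clamped 8-bit reconstructions would coincide exactly; the naive quantizer above carries no such guarantee and merely makes the comparison concrete.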
