Abstract

A method for temporal prediction of image sequences is proposed. The motion vectors of conventional block-based motion compensation schemes are used to convey a mapping of a selected set of image points, instead of blocks, between the previous and the current image. The prediction is made by geometrically transforming, or warping, the previous image using the point pairs defined by the mapping as fixed points in the transformation. This method produces a prediction image without block artifacts and can compensate for many motion types where conventional block matching fails, such as scaling and rotation. It can also be incorporated into existing hybrid video compression systems with little additional complexity and few or no changes in the bit stream syntax. It is shown that a significant subjective improvement in the prediction, as well as a consistent reduction in the objectively measured prediction error, is obtained.
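The warping idea can be illustrated with a minimal NumPy sketch. This is not the paper's exact geometric transformation: it substitutes simple inverse-distance interpolation of the control-point motion vectors for the paper's warping function, and uses nearest-neighbor sampling. The function name and interface are illustrative assumptions.

```python
import numpy as np

def warp_predict(prev, points, vectors):
    """Predict the current frame by backward-warping `prev` (sketch only).

    prev    : (H, W) previous frame.
    points  : (N, 2) control-point coordinates (row, col) in the current frame.
    vectors : (N, 2) motion vectors mapping each control point back to `prev`.

    A dense backward-mapping field is built by inverse-distance
    interpolation of the control-point vectors (an assumed stand-in for
    the paper's geometric transform), then `prev` is sampled.
    """
    h, w = prev.shape
    rows, cols = np.mgrid[0:h, 0:w]
    grid = np.stack([rows.ravel(), cols.ravel()], axis=1).astype(float)
    # Squared distance from every pixel to every control point.
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    wgt = 1.0 / (d2 + 1e-6)                   # inverse-distance weights
    wgt /= wgt.sum(axis=1, keepdims=True)     # normalize per pixel
    field = wgt @ vectors                     # dense motion field
    # Backward mapping: sample prev at (pixel + motion), nearest neighbor.
    src = np.clip(np.rint(grid + field), [0, 0], [h - 1, w - 1]).astype(int)
    return prev[src[:, 0], src[:, 1]].reshape(h, w)
```

Because the motion field interpolates between control points, the prediction varies smoothly across the image, which is what removes the block artifacts of conventional block matching; for a pure translation (all control vectors equal) it reduces to a plain shifted copy of the previous frame.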
