Abstract

We propose a novel content-adaptive method for motion-compensated three-dimensional discrete wavelet transformation (MC 3-D DWT) of video. The proposed method overcomes the problems of ghosting and nonaligned aliasing artifacts which can arise in regions of motion model failure when the video is reconstructed at reduced temporal or spatial resolutions. Previous MC 3-D DWT structures either take the form of an MC temporal DWT followed by a spatial transform ("t+2D"), or perform the spatial transform first ("2D+t"), limiting the spatial frequencies which can be jointly compensated in the temporal transform and hence limiting compression efficiency. When the motion model fails, the "t+2D" structure causes nonaligned aliasing artifacts in reduced spatial resolution sequences. Essentially, the proposed transform continuously adapts itself between the "t+2D" and "2D+t" structures, based on information available within the compressed bit stream. Ghosting artifacts may also appear in reduced frame-rate sequences, due to temporal low-pass filtering along invalid motion trajectories. To avoid these ghosting artifacts, we continuously select between different low-pass temporal filters, based on the estimated accuracy of the motion model. Experimental results indicate that the proposed adaptive transform preserves high compression efficiency while substantially improving the quality of reduced spatial and temporal resolution sequences.
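To make the adaptive temporal filtering idea concrete, the sketch below shows one motion-compensated Haar lifting step in which the update (low-pass) step is applied only where the motion model appears to hold. This is an illustrative simplification, not the paper's actual transform: the warp functions `warp_a_to_b`/`warp_b_to_a`, the residual-magnitude reliability test, and the threshold `thresh` are all hypothetical stand-ins for the motion model and accuracy estimate described in the abstract.

```python
import numpy as np

def mc_haar_lift(frame_a, frame_b, warp_a_to_b, warp_b_to_a, thresh=10.0):
    """One motion-compensated Haar lifting step with an adaptive update.

    Predict:  H = B - W_{a->b}(A)
    Update:   L = A + 0.5 * W_{b->a}(H), applied only where the warped
              residual is small (a crude motion-accuracy proxy), so the
              low-pass frame falls back to plain frame A where the motion
              model fails -- avoiding ghosting in reduced frame-rate video.
    """
    high = frame_b - warp_a_to_b(frame_a)   # high-pass (detail) frame
    back = warp_b_to_a(high)                # residual mapped to A coordinates
    reliable = np.abs(back) < thresh        # hypothetical accuracy mask
    low = np.where(reliable, frame_a + 0.5 * back, frame_a)
    return low, high
```

With identity warps (no motion), pixels whose temporal difference exceeds the threshold keep frame A unchanged in the low-pass output, while well-predicted pixels receive the usual Haar update; a real codec would derive the reliability mask from the coded motion field rather than from the residual alone.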
