Abstract

This paper proposes a generic video compression framework that operates on low-quality video data and uses a learning-based approach rooted in sparse representation to address the ill-posed problem of video super-resolution reconstruction. The reconstruction is regularized by a prior manifold defined only on “primitive patches”, and each primitive patch is modeled by a sparse representation over an over-complete dictionary learned from a training set. Owing to the low intrinsic dimensionality of primitives, the number of samples in the dictionary can be greatly reduced. Because the manifolds of the feature spaces of the low-frequency and high-frequency primitives share similar geometry, we hypothesize that a low-frequency primitive patch and its corresponding high-frequency primitive patch share the same sparse representation structure. Accordingly, high-resolution frame primitives are divided into low-frequency and high-frequency frame primitives, and high-frequency frame primitive patches can be synthesized from the high-frequency primitive patch dictionary together with the sparse structure of the corresponding low-frequency frame primitive patches. The framework involves neither explicit motion estimation nor any assistant information, and it decomposes the original video sequence into key frames and low-resolution frames with low entropy. The corresponding high-resolution frames are reconstructed by combining the high-frequency and low-frequency patches under smoothness constraints and a back-projection process. Experimental results demonstrate objective and subjective efficiency in comparison with H.264/AVC and existing super-resolution reconstruction approaches.
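
To make the shared-sparse-structure hypothesis concrete, the sketch below illustrates the general coupled-dictionary idea: a low-frequency patch is sparsely coded over a low-frequency dictionary, and the same coefficients are reused with the high-frequency dictionary to synthesize the missing detail. This is a minimal illustration under assumed names and shapes (D_l, D_h, patch sizes, sparsity level), not the authors' implementation; the dictionaries here are random placeholders standing in for trained ones.

```python
# Minimal sketch of synthesizing a high-frequency patch from the sparse code
# of its low-frequency counterpart, assuming pre-trained coupled dictionaries.
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)

patch_dim = 25   # e.g. 5x5 patches, flattened (illustrative)
n_atoms = 512    # over-complete: n_atoms >> patch_dim

# Hypothetical coupled dictionaries (columns are atoms); in practice these
# would be learned jointly from low-/high-frequency training patch pairs.
D_l = rng.standard_normal((patch_dim, n_atoms))
D_l /= np.linalg.norm(D_l, axis=0)            # unit-norm atoms
D_h = rng.standard_normal((patch_dim, n_atoms))

def synthesize_hf_patch(lf_patch, D_l, D_h, sparsity=5):
    """Infer the sparse code of a low-frequency patch over D_l and reuse
    the same code with D_h to synthesize the high-frequency patch."""
    alpha = orthogonal_mp(D_l, lf_patch, n_nonzero_coefs=sparsity)
    return D_h @ alpha                        # shared sparse structure

lf_patch = rng.standard_normal(patch_dim)     # stand-in low-frequency patch
hf_patch = synthesize_hf_patch(lf_patch, D_l, D_h)
print(hf_patch.shape)                         # (25,)
```

In a full pipeline, the synthesized high-frequency patches would be combined with the low-frequency patches under smoothness constraints across overlapping patches, followed by back-projection against the low-resolution frame, as described above.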
