Abstract

This paper presents a hybrid scheme that jointly exploits a self-similarity prior and deep convolutional neural network (CNN) fusion for compression artifact reduction in low bit-rate video applications. Based on the temporal correlation hypothesis, the self-similarity prior is extended to the temporal domain by using as references not only the current decoded frame but also its neighbouring frames. Furthermore, recognizing that bicubic downsampling typically improves the perceptual quality of a video coded at a low bit rate, for each small patch in the current frame we search for similar patches in down-scaled versions of these references, and then form several self-similarity-based predictions by tiling the matched patches at the corresponding positions. To further exploit information flow across scales, a deep CNN model with two sub-networks is constructed to estimate the final output: one sub-network takes as input the self-similarity-based predictions together with the decoded frame itself, while the other takes the down-scaled versions of these frames. Experimental results demonstrate that the proposed method markedly improves both the subjective and the objective quality of video sequences compressed at low bit rates.
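To make the cross-scale search concrete, the following is a minimal sketch, not the authors' implementation: for each patch in the decoded frame it finds the most similar patch in bicubically down-scaled versions of the current and neighbouring decoded frames and tiles the matches back into a prediction. The patch size, scale factor, search radius, and the SSD matching criterion are all illustrative assumptions.

import numpy as np
import cv2  # used only for bicubic down-scaling


def self_similarity_prediction(decoded, references, patch=8, scale=0.5,
                               search_radius=8):
    """Build one self-similarity-based prediction of `decoded`
    (H x W float32, grayscale) from down-scaled `references`."""
    h, w = decoded.shape
    prediction = np.zeros_like(decoded)
    # Bicubic down-scaling of every reference frame (current + neighbours).
    small_refs = [cv2.resize(r, None, fx=scale, fy=scale,
                             interpolation=cv2.INTER_CUBIC)
                  for r in references]
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            query = decoded[y:y + patch, x:x + patch]
            best, best_cost = query, np.inf
            # Position corresponding to (y, x) in the down-scaled frames.
            cy, cx = int(y * scale), int(x * scale)
            for ref in small_refs:
                rh, rw = ref.shape
                y0, y1 = max(0, cy - search_radius), min(rh - patch, cy + search_radius)
                x0, x1 = max(0, cx - search_radius), min(rw - patch, cx + search_radius)
                for sy in range(y0, y1 + 1):
                    for sx in range(x0, x1 + 1):
                        cand = ref[sy:sy + patch, sx:sx + patch]
                        cost = np.sum((cand - query) ** 2)  # SSD match
                        if cost < best_cost:
                            best_cost, best = cost, cand
            # Tile the best cross-scale match at the corresponding position.
            prediction[y:y + patch, x:x + patch] = best
    return prediction

The two-sub-network fusion stage could take a form along these lines in PyTorch; the layer counts, channel widths, and bicubic upsampling of the low-resolution features are assumptions for illustration, not the paper's reported architecture.

import torch
import torch.nn as nn


class TwoBranchFusionCNN(nn.Module):
    """Sketch of a two-branch fusion CNN: one branch sees the decoded frame
    stacked with K self-similarity-based predictions at full resolution,
    the other sees the down-scaled versions of the same frames."""

    def __init__(self, num_predictions=3, channels=64):
        super().__init__()
        in_ch = 1 + num_predictions  # decoded frame + K predictions
        def branch():
            return nn.Sequential(
                nn.Conv2d(in_ch, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.full_branch = branch()
        self.small_branch = branch()
        # Bring low-resolution features back to full resolution before fusion.
        self.upsample = nn.Upsample(scale_factor=2, mode='bicubic',
                                    align_corners=False)
        self.fuse = nn.Conv2d(2 * channels, 1, 3, padding=1)

    def forward(self, full_stack, small_stack):
        f = self.full_branch(full_stack)
        s = self.upsample(self.small_branch(small_stack))
        return self.fuse(torch.cat([f, s], dim=1))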
