Abstract

Lossy compression of images and videos yields visually annoying artifacts such as blocking, blurring, and ringing, especially at low bit rates. In-loop filtering techniques can reduce these artifacts, improve reconstruction quality, and thereby achieve coding gain. In this paper, we present a convolutional neural network (CNN) based in-loop filter for High Efficiency Video Coding (HEVC). First, we design a new CNN structure for artifact reduction, named VRCNN-ext, which is composed of multiple Variable-filter-size Residue-learning blocks. VRCNN-ext is trained on natural images together with their compressed versions at different quality levels. Second, we investigate a new in-loop filter based on the trained VRCNN-ext models. Specifically, we observed that applying VRCNN-ext directly to inter pictures is not effective. To solve this problem, we further train a classifier that decides, for each coding unit (CU), whether to apply VRCNN-ext. The classifier makes its decision based on information already available in the compressed data, thus avoiding the overhead bits that would otherwise be needed to signal the on/off status of the CNN-based filter at the CU level. Experimental results show that our scheme achieves significant bit savings over the HEVC anchor, with on average 9.2%, 9.6%, and 7.4% BD-rate reduction on the HEVC test sequences under the all-intra, low-delay B, and random-access configurations, respectively.
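The CU-level gating idea described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual method: the feature set, the rule-of-thumb classifier, and the placeholder filter are all assumptions introduced here for clarity. The key point it demonstrates is that the decision uses only information already present in the compressed data, so no extra on/off flag has to be transmitted per CU.

```python
def cu_features(cu):
    """Extract decision features from compressed information only.
    The concrete feature set (QP, CU size, intra/inter mode) is an
    assumption for this sketch, not the paper's actual feature list."""
    return (cu["qp"], cu["size"], cu["is_inter"])

def apply_filter_decision(cu, classifier, cnn_filter):
    """Apply the CNN-based filter to a CU only when the classifier
    predicts it will help; otherwise pass the pixels through unchanged.
    Because the classifier sees only decoder-available data, the encoder
    and decoder reach the same decision without any signaled flag."""
    if classifier(cu_features(cu)):
        return cnn_filter(cu["pixels"])
    return cu["pixels"]

# Toy stand-ins to make the sketch runnable: a hypothetical rule
# (filter only intra-coded CUs at high QP) and a placeholder "filter"
# standing in for VRCNN-ext inference.
toy_classifier = lambda f: f[0] >= 32 and not f[2]
toy_filter = lambda px: [p + 1 for p in px]

cu = {"qp": 37, "size": 32, "is_inter": False, "pixels": [10, 20, 30]}
out = apply_filter_decision(cu, toy_classifier, toy_filter)
```

With these toy stand-ins, the intra CU at QP 37 is filtered, while an inter CU with the same features would be passed through untouched, mirroring the observation that applying the CNN directly to inter pictures is not effective.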
