Lossy image coding techniques typically introduce a variety of undesirable compression artifacts. Recently, deep convolutional neural networks have made encouraging advances in compression artifact reduction. However, most of these methods focus on restoring the luma channel without considering the chroma components. Moreover, most deep convolutional neural networks are difficult to deploy in practical applications because of their high model complexity. In this article, we propose a dual-stage feedback network (DSFN) for lightweight color image compression artifact reduction. Specifically, we devise a novel curriculum learning strategy that drives the DSFN to reduce color image compression artifacts in a luma-to-RGB manner. In the first stage, the DSFN is dedicated to reconstructing the luma channel; its high-level features, which contain rich structural information, are then rerouted to the second stage through a feedback connection to guide RGB image restoration. Furthermore, we present a novel enhanced feedback block for efficient high-level feature extraction, in which a carefully designed adaptive iterative self-refinement module progressively refines the low-level features, and an enhanced separable convolution is introduced to fully exploit multiscale image information. Extensive experiments show the notable advantage of our DSFN over several state-of-the-art methods in both quantitative indices and visual quality, at lower model complexity.
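The abstract gives no implementation details, so the following is only a minimal PyTorch sketch of the luma-to-RGB dataflow it describes: stage 1 restores the luma channel, and a feedback connection reroutes its features into the RGB restoration stage. All class names, channel widths, and the simple residual blocks here are our own assumptions; they stand in for, and do not reproduce, the paper's enhanced feedback block, adaptive iterative self-refinement module, enhanced separable convolution, or curriculum learning procedure.

```python
import torch
import torch.nn as nn


class FeedbackBlockSketch(nn.Module):
    """Stand-in for the paper's enhanced feedback block (details not given in the abstract)."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.PReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.PReLU(),
        )

    def forward(self, x):
        # Residual refinement of the incoming features.
        return x + self.body(x)


class DSFNSketch(nn.Module):
    """Two-stage luma-to-RGB restoration with a feedback connection between stages."""

    def __init__(self, channels: int = 32):
        super().__init__()
        # Stage 1: restore the luma (Y) channel of the compressed image.
        self.luma_head = nn.Conv2d(1, channels, 3, padding=1)
        self.luma_block = FeedbackBlockSketch(channels)
        self.luma_tail = nn.Conv2d(channels, 1, 3, padding=1)
        # Stage 2: restore the RGB image, guided by stage-1 high-level features.
        self.rgb_head = nn.Conv2d(3, channels, 3, padding=1)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)  # merge fed-back luma features
        self.rgb_block = FeedbackBlockSketch(channels)
        self.rgb_tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, luma, rgb):
        # Stage 1: luma reconstruction.
        f_luma = self.luma_block(self.luma_head(luma))
        luma_out = luma + self.luma_tail(f_luma)
        # Feedback connection: stage-1 high-level features guide the RGB stage.
        f_rgb = self.rgb_head(rgb)
        f_rgb = self.fuse(torch.cat([f_rgb, f_luma], dim=1))
        f_rgb = self.rgb_block(f_rgb)
        rgb_out = rgb + self.rgb_tail(f_rgb)
        return luma_out, rgb_out


if __name__ == "__main__":
    # Dummy 64x64 compressed inputs: one luma channel and the RGB image.
    luma = torch.rand(1, 1, 64, 64)
    rgb = torch.rand(1, 3, 64, 64)
    y_hat, rgb_hat = DSFNSketch()(luma, rgb)
    print(y_hat.shape, rgb_hat.shape)  # torch.Size([1, 1, 64, 64]) torch.Size([1, 3, 64, 64])
```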