This paper proposes a novel approach to lossless image compression. The proposed coding approach employs a deep-learning-based method to compute the prediction for each pixel and a context-tree-based bit-plane codec to encode the prediction errors. First, a novel deep-learning-based predictor is proposed to estimate the residuals produced by traditional prediction methods. It is shown that the deep-learning paradigm substantially boosts the prediction accuracy compared with traditional prediction methods. Second, the prediction error is modeled by a context modeling method and encoded using a novel context-tree-based bit-plane codec. Codec profiles performing either one or two coding passes are proposed, trading off complexity against compression performance. The experimental evaluation is carried out on three types of data: photographic images, lenslet images, and video sequences. The results show that the proposed lossless coding approach systematically and substantially outperforms state-of-the-art methods on each type of data.
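To make the two-stage prediction idea concrete, below is a minimal, hypothetical sketch of the kind of pipeline the abstract describes: a traditional predictor produces an initial estimate, and a small neural network estimates that predictor's residual from a causal neighborhood, so only the remaining error would be handed to the entropy coder. The MED predictor, the tiny CNN architecture, and all names here are illustrative assumptions, not the paper's actual network or its context-tree-based bit-plane codec.

```python
# Hypothetical sketch of a two-stage prediction scheme for lossless coding.
# Stage 1: a traditional predictor (here, the JPEG-LS MED predictor).
# Stage 2: a small CNN estimates the Stage-1 residual from causal context,
# so only the remaining prediction error needs to be entropy coded.
import numpy as np
import torch
import torch.nn as nn


def median_predictor(img: np.ndarray) -> np.ndarray:
    """Classic MED/LOCO-I prediction from the west, north, north-west neighbors."""
    pred = np.zeros_like(img, dtype=np.int32)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            a = int(img[y, x - 1]) if x > 0 else 0                    # west
            b = int(img[y - 1, x]) if y > 0 else 0                    # north
            c = int(img[y - 1, x - 1]) if x > 0 and y > 0 else 0      # north-west
            if c >= max(a, b):
                pred[y, x] = min(a, b)
            elif c <= min(a, b):
                pred[y, x] = max(a, b)
            else:
                pred[y, x] = a + b - c
    return pred


class ResidualEstimator(nn.Module):
    """Tiny illustrative CNN mapping a causal patch to an estimate of the MED residual."""

    def __init__(self, patch: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * patch * patch, 1),
        )

    def forward(self, causal_patch: torch.Tensor) -> torch.Tensor:
        # causal_patch: (batch, 1, patch, patch) tensor of already-decoded neighbors
        return self.net(causal_patch)


# The error passed to the entropy coder would then be
#     e = x - (MED(x) + CNN_residual_estimate),
# which should be smaller (hence more compressible) than the raw MED residual
# whenever the network is well trained.
```

In such a scheme the decoder can reproduce the prediction exactly because both stages use only previously decoded (causal) pixels; the compression gain then comes from how the remaining errors are modeled and coded, which in the paper is handled by the context-tree-based bit-plane codec.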