Abstract

An end-to-end image compression framework based on deep residual learning is proposed. Three levels of residual learning are adopted to improve compression quality: (1) the ResNet structure; (2) deep channel residual learning for quantization; and (3) global residual learning in full resolution. The residual distribution is commonly a single Gaussian, which is relatively easy for a neural network to learn. Furthermore, an attention model is incorporated into the proposed framework to compress different regions of an image with adaptive bit allocation. In experiments on the Kodak PhotoCD test set, the proposed approach outperforms JPEG and JPEG2000 in both PSNR and MS-SSIM at low bpp (bits per pixel), and it produces much better visual quality. Compared to state-of-the-art deep learning-based codecs, the proposed approach also achieves competitive performance.
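The global residual learning idea from the abstract can be sketched as follows. This is a minimal illustration, not the paper's architecture: `predict_residual` is a hypothetical stand-in for the learned residual network, and the decoder only has to model the (roughly Gaussian) difference between a coarse reconstruction and the original image.

```python
import numpy as np

def global_residual_reconstruct(coarse, predict_residual):
    """Global residual learning in full resolution (sketch): the final
    output is the coarse reconstruction plus a learned full-resolution
    residual, clipped back to the valid intensity range."""
    return np.clip(coarse + predict_residual(coarse), 0.0, 1.0)

# Hypothetical residual predictor standing in for the learned network.
predict_residual = lambda x: 0.1 * (0.5 - x)

coarse = np.full((4, 4), 0.25)  # toy coarse reconstruction in [0, 1]
recon = global_residual_reconstruct(coarse, predict_residual)
print(recon.shape)  # (4, 4)
```

Because the residual is small and narrowly distributed, it is an easier target for the network than the full-resolution image itself.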

Highlights

  • Image compression is a fundamental and well-studied problem in the data compression field. Typical conventional compression algorithms such as JPEG [1] and JPEG2000 [2] are based on transform coding theory [3]

  • Lossy image compression frameworks based on deep learning (DL) have raised interest in both deep learning and image processing communities [4,5,6,7,8,9,10]

  • The distortion is only assessed by mean square error (MSE) loss
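Since the highlighted frameworks assess distortion only with MSE, a minimal sketch of that loss, and of PSNR (the metric reported in the abstract, which is a log transform of MSE), may help. The function names here are illustrative, not from the paper:

```python
import numpy as np

def mse_distortion(original, reconstructed):
    """Mean square error distortion: the average squared pixel error."""
    return float(np.mean((original - reconstructed) ** 2))

def psnr(original, reconstructed, peak=1.0):
    """PSNR in dB derived from MSE; assumes intensities in [0, peak]."""
    return 10.0 * np.log10(peak ** 2 / mse_distortion(original, reconstructed))

# Toy example: a uniform reconstruction error of 0.1 on a [0, 1] signal.
x = np.zeros((2, 2))
x_hat = np.full((2, 2), 0.1)
print(mse_distortion(x, x_hat))  # 0.01
print(psnr(x, x_hat))            # 20 dB
```

Optimizing MSE alone favors PSNR but does not always track perceptual metrics such as MS-SSIM, which is one motivation for the comparisons in the experiments.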


Summary

Introduction

Image compression is a fundamental and well-studied problem in the data compression field. Typical conventional compression algorithms such as JPEG [1] and JPEG2000 [2] are based on transform coding theory [3]. Lossy image compression frameworks based on deep learning (DL) have raised interest in both the deep learning and image processing communities [4,5,6,7,8,9,10]. Although these approaches are competitive with existing engineered codecs such as JPEG [1], JPEG2000 [2], WebP [11], and BPG [12], several issues and challenges still need to be addressed. A residual learning framework is proposed that introduces several novel techniques to improve the compression quality.

