Abstract

With the increasing popularity of deep learning in image processing, many learned lossless image compression methods have been proposed recently. One group of algorithms is based on scale-based auto-regressive models and can provide competitive compression performance while also allowing easily parallelized computation and short encoding/decoding times. However, these methods use large neural networks and have high computational requirements. This paper presents an interpolation-based learned lossless image compression method that falls within the scale-based auto-regressive model group. The method achieves compression performance better than or on par with recent scale-based auto-regressive models, yet requires more than 10x fewer neural network parameters (0.19M) and correspondingly lower encoding/decoding computational complexity. These gains are due to contributions and findings in the overall system and neural network architecture design, such as sharing interpolator neural networks across different scales, using separate neural networks for the different parameters of the probability distribution model, and performing the processing in the YCoCg-R color space instead of the RGB color space.

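To make the color-space choice concrete: YCoCg-R is the standard integer-reversible variant of the YCoCg transform, computable with additions, subtractions, and halving, so no information is lost before entropy coding. The sketch below is a minimal illustration of that standard transform (not the paper's code); the function names and the NumPy setup are this example's own assumptions.

```python
import numpy as np

def rgb_to_ycocg_r(rgb):
    """Forward lossless YCoCg-R transform (integer, perfectly invertible).

    rgb: integer array of shape (..., 3) holding R, G, B channels.
    Returns an array of shape (..., 3) holding Y, Co, Cg channels.
    """
    rgb = rgb.astype(np.int32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    co = r - b            # orange-difference chroma
    t = b + (co // 2)     # floor division matches the arithmetic shift in the spec
    cg = g - t            # green-difference chroma
    y = t + (cg // 2)     # luma
    return np.stack([y, co, cg], axis=-1)

def ycocg_r_to_rgb(ycocg):
    """Inverse YCoCg-R transform; recovers the original RGB exactly."""
    ycocg = ycocg.astype(np.int32)
    y, co, cg = ycocg[..., 0], ycocg[..., 1], ycocg[..., 2]
    t = y - (cg // 2)
    g = cg + t
    b = t - (co // 2)
    r = b + co
    return np.stack([r, g, b], axis=-1)

# Round-trip check on a random 8-bit image block.
img = np.random.randint(0, 256, size=(4, 4, 3))
assert np.array_equal(ycocg_r_to_rgb(rgb_to_ycocg_r(img)), img)
```

Because the forward and inverse transforms use the same floor divisions, the round trip is bit-exact, which is what makes the color space usable in a lossless pipeline.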