Abstract

Recently, learned image compression methods based on entropy minimization have achieved superior results compared with conventional image codecs such as BPG and JPEG2000. However, they leverage single Gaussian models, which have a limited ability to approximate the various irregular distributions of transformed latent representations, resulting in suboptimal coding efficiency. Furthermore, existing methods focus on constructing effective entropy models rather than utilizing modern architectural techniques. In this paper, we propose a novel joint learning scheme called JointIQ‐Net that incorporates image compression and quality enhancement technologies with improved entropy minimization based on a newly adopted Gaussian mixture model. We also exploit global context to estimate the distributions of latent representations precisely. The results of extensive experiments demonstrate that JointIQ‐Net achieves remarkable performance improvements in terms of coding efficiency compared with existing learned image compression methods and conventional codecs. To the best of our knowledge, ours is the first learned image compression method that outperforms VVC intra‐coding in terms of both PSNR and MS‐SSIM.
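The Gaussian mixture entropy model mentioned above can be illustrated with a minimal sketch. In learned image compression, a latent value is typically quantized to an integer, and its coding cost is the negative log of its probability mass, commonly computed as a CDF difference over the quantization interval. The sketch below assumes this standard discretization; the function name, the example mixture parameters, and the three-component mixture are illustrative assumptions, not the paper's actual network outputs.

```python
import math

def gmm_likelihood(y, weights, means, scales):
    """Probability mass of an integer-quantized latent y under a Gaussian
    mixture, computed as the CDF difference over [y - 0.5, y + 0.5]
    (the standard discretization in learned image compression).
    Mixture parameters here are illustrative; in practice they would be
    predicted per-element by the hyperprior/context networks."""
    def phi(x, mu, sigma):
        # Gaussian CDF via the error function.
        return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
    return sum(
        w * (phi(y + 0.5, mu, s) - phi(y - 0.5, mu, s))
        for w, mu, s in zip(weights, means, scales)
    )

# Hypothetical 3-component mixture; the bit cost of coding y = 0 is
# -log2 of its probability mass.
p = gmm_likelihood(0, weights=[0.5, 0.3, 0.2],
                   means=[0.0, 2.0, -1.5], scales=[1.0, 0.5, 2.0])
bits = -math.log2(p)
```

Because a mixture can place mass at several modes with different spreads, it can fit the irregular, often multi-modal latent distributions that a single Gaussian cannot, which is the source of the coding-efficiency gain the abstract describes.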
