Abstract

The rate–distortion (RD) performance of learning-based image compression (LIC) has already surpassed that of traditional Versatile Video Coding (VVC) intra-coding. However, this gain in compression efficiency comes at the cost of high computational complexity, which is prohibitively expensive and has become a bottleneck for practical applications. To this end, we propose an efficient LIC method with lightweight designs for real-time practical applications, achieving a better trade-off between compression performance and computational complexity. Specifically, residual-connected lightweight attention units (RLAUs) are stacked for feature extraction, embedding global spatial information effectively while keeping complexity relatively low. Meanwhile, a trainable channel-gained adaptive module (CGAM) is introduced into both the nonlinear transform network and the multi-stage context model; this module re-distributes the importance of different channels, further improving compression efficiency. Experimental results demonstrate that the proposed method achieves better compression efficiency than VVC intra-coding. Furthermore, compared with other state-of-the-art LIC approaches, the proposed method significantly reduces coding time while maintaining comparable coding efficiency. The source code is available at https://github.com/llsurreal919/LightweightLIC.
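The channel re-weighting idea behind the CGAM can be pictured as a trainable per-channel gain applied to feature maps. The following NumPy sketch is only illustrative; the class name, tensor shapes, and plain elementwise scaling are our assumptions, not details taken from the paper:

```python
import numpy as np

class ChannelGainAdaptiveModule:
    """Hypothetical sketch of a channel-gain module: a trainable
    per-channel scale that re-distributes importance across channels."""

    def __init__(self, num_channels: int):
        # In a real model this vector would be learned end-to-end;
        # here it is simply initialized to ones.
        self.gain = np.ones(num_channels, dtype=np.float32)

    def __call__(self, x: np.ndarray) -> np.ndarray:
        # x has shape (C, H, W); channel c is scaled by gain[c].
        return x * self.gain[:, None, None]

# Toy usage: emphasize channel 0, suppress channel 2.
cgam = ChannelGainAdaptiveModule(3)
cgam.gain = np.array([2.0, 1.0, 0.5], dtype=np.float32)
features = np.ones((3, 4, 4), dtype=np.float32)
scaled = cgam(features)
```

In practice such a gain vector is trained jointly with the transform network, so channels that carry more rate–distortion-relevant information receive larger weights.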
