While neural-network-based lossy image compression methods have shown impressive performance, most of them produce a fixed-length code using a network trained for a single rate. In practice, however, it is essential to support variable-rate compression, or to meet a target rate, without sacrificing coding performance. This paper advances neural-network-based image compression by enabling a single network model to generate variable compression rates. Our model combines an auto-encoder (AE) with a generative adversarial network (GAN) for generative compression. We introduce a noise-interference mechanism that trains the feature representation produced by the encoder so that the feature nodes are ordered from most to least important in the feature expression, making their training controllable. Based on this importance ordering, the latent nodes are quantized into bits, and variable-rate compression is achieved by discarding the bits of less-important feature nodes until the compression target is met. We propose several noise-interference methods, and experiments confirm the feasibility of the Random-add and Dropout methods for controllable learning. Further experiments show that our method not only achieves variable-rate compression but also recovers high-quality images at extremely low bit rates, outperforming fixed-rate models.
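The core idea, truncating an importance-ordered latent vector to vary the rate, can be illustrated with a minimal NumPy sketch. All names here are hypothetical, and the prefix-mask noise and rounding-based quantization are simplifying assumptions standing in for the paper's noise-interference training and bit allocation:

```python
import numpy as np

rng = np.random.default_rng(0)


def prefix_noise_mask(n_nodes, rng):
    """Training-time noise (hypothetical): keep a random prefix of
    latent nodes and zero the rest. Truncating at a random index
    pushes the encoder to pack the most important information into
    the earliest nodes, yielding the importance ordering the
    abstract relies on."""
    k = rng.integers(1, n_nodes + 1)  # random truncation point
    mask = np.zeros(n_nodes)
    mask[:k] = 1.0
    return mask


def variable_rate_code(latent, keep):
    """Test-time variable-rate coding (sketch): keep only the first
    `keep` (most important) nodes, crudely quantized by rounding,
    and discard the tail to meet the rate target."""
    code = np.zeros_like(latent)
    code[:keep] = np.round(latent[:keep])
    return code


latent = rng.normal(size=8)                      # stand-in encoder output
noisy = latent * prefix_noise_mask(8, rng)       # training-time interference
code = variable_rate_code(latent, keep=4)        # lower-rate code: 4 of 8 nodes
```

With a single trained model, varying `keep` trades rate for quality; the discarded tail nodes carry the least important features, so quality degrades gracefully.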