Rate-Complexity Optimization in Lossless Neural-Based Image Compression

Abstract

Neural networks are now widely used in image compression. Network architecture and hyperparameter choices affect both compression performance and complexity, but (as we show) there are many cases in which higher complexity does not yield better compression. It is therefore desirable to perform rate-complexity optimization over the space of hyperparameters. In the context of neural-based lossless image compression, we propose an algorithm that identifies the hyperparameter choices corresponding to points on or near the lower convex hull of the cloud of rate-complexity points produced by all hyperparameter combinations, without requiring advance knowledge of the rate-complexity performance of each combination. In our experiments, this reduces the training/evaluation load of the rate-complexity optimization by over 50% for each of three measures of complexity: multiply/add operations per pixel, Joules per pixel, and encoded network size.
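To illustrate the geometric object the abstract refers to, the following is a minimal sketch of computing the lower convex hull of a cloud of (complexity, rate) points using the standard monotone-chain construction. This is only an illustration of the hull itself, not the paper's search algorithm, whose point is precisely to find these hull points without first evaluating every hyperparameter combination.

```python
def lower_convex_hull(points):
    """Return the lower convex hull of 2-D (complexity, rate) points,
    ordered by increasing complexity (Andrew's monotone chain, lower half)."""
    def cross(o, a, b):
        # z-component of (a - o) x (b - o); <= 0 means a non-left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    pts = sorted(set(points))  # sort by complexity, then rate
    hull = []
    for p in pts:
        # pop points that would make the chain bend downward-right (not convex)
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return hull


# Hypothetical rate-complexity cloud: (3, 4) lies above the hull and is excluded.
cloud = [(1, 5.0), (2, 3.0), (3, 4.0), (4, 2.0), (5, 2.5)]
print(lower_convex_hull(cloud))  # → [(1, 5.0), (2, 3.0), (4, 2.0), (5, 2.5)]
```

Points above this hull are dominated by convex combinations of hull points, which is why the optimization can restrict attention to hyperparameter choices on or near the hull.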
