Abstract

Implicit neural representations (INRs) have gained popularity in deep learning for effectively representing various data types. However, previous studies on INRs have focused only on recovering the original signal. This paper investigates an image compression model based on INRs that applies an entropy-constrained model compression technique to the network itself. Specifically, the proposed model trains a multilayer perceptron (MLP) to overfit a single image and optimizes a compressed representation of its weights using additive uniform noise as a differentiable surrogate for quantization. Accordingly, the model minimizes the size of its weights in an end-to-end manner, and this optimization process lends itself naturally to adjusting the rate-distortion trade-off in image compression. In contrast to other model compression techniques, the proposed model requires no additional training stage or memory cost. By introducing an entropy loss, this paper demonstrates that the proposed model preserves high image quality while keeping the model size small. Experimental results show that the proposed model achieves performance comparable to conventional image compression models without incurring high storage costs.
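
To make the described pipeline concrete, below is a minimal PyTorch sketch of the general technique: a small coordinate MLP overfits one image while its weights are perturbed with additive uniform noise of width delta (a standard differentiable surrogate for quantization), and a rate term is added to the distortion loss. The network shape, the fixed Gaussian negative log-likelihood used as a stand-in for the paper's learned entropy model, and all hyperparameters here are illustrative assumptions, not the authors' exact implementation.

```python
import math
import torch
import torch.nn as nn
from torch.func import functional_call

class CoordMLP(nn.Module):
    """Maps (x, y) pixel coordinates to RGB values."""
    def __init__(self, hidden=64, depth=3):
        super().__init__()
        dims = [2] + [hidden] * depth + [3]
        layers = []
        for i in range(len(dims) - 1):
            layers.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:
                layers.append(nn.ReLU())
        self.net = nn.Sequential(*layers)

    def forward(self, coords):
        return self.net(coords)

def rate_proxy(w, sigma=0.1):
    # Crude rate estimate in bits under a zero-mean Gaussian prior;
    # the paper uses a learned entropy model instead (assumption here).
    nll = 0.5 * (w / sigma) ** 2 + math.log(sigma * math.sqrt(2 * math.pi))
    return nll.sum() / math.log(2)  # nats -> bits

model = CoordMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
delta, lam = 1e-2, 1e-4        # quantization step and rate weight (assumed)
coords = torch.rand(4096, 2)   # pixel coordinates, normalized to [0, 1]^2
target = torch.rand(4096, 3)   # ground-truth RGB values at those coordinates

for _ in range(1000):
    # Perturb every weight with uniform noise in [-delta/2, delta/2] so the
    # forward pass sees a differentiable surrogate of quantized weights.
    noisy = {name: p + (torch.rand_like(p) - 0.5) * delta
             for name, p in model.named_parameters()}
    pred = functional_call(model, noisy, (coords,))
    distortion = ((pred - target) ** 2).mean()
    rate = sum(rate_proxy(p) for p in noisy.values())
    loss = distortion + lam * rate  # end-to-end rate-distortion objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Sweeping the rate weight lam traces out a rate-distortion curve: larger values push the noisy weights toward cheaper-to-encode configurations at the cost of reconstruction quality, which is the trade-off the abstract describes.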
