Abstract

Image compression has always been an important aspect of computer vision problems that involve image transmission, and many image compression techniques have been developed over time. Image compression algorithms using neural networks have already been implemented, but they are often complex and difficult to run on conventional hardware. Most neural compression algorithms are applied generically to input encoding and reconstruction, whereas the proposed algorithm deliberately over-fits the training dataset to improve reconstruction accuracy. In this paper, we propose a convolutional neural network for lossy image compression and a second neural network for image reconstruction, together forming an encoder–decoder architecture. This architecture allows a large number of images to be processed in parallel, decreasing the overall time taken to compress them. The algorithm runs easily on highly parallel hardware, which makes it well suited to data centers with large amounts of computing resources.
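
As an illustrative sketch only (not the paper's exact network), the encoder–decoder idea can be expressed in PyTorch as below. All layer sizes, channel counts, the MSE loss, and the training-loop details are assumptions made for illustration; the point is that one convolutional network maps an image to a smaller code and a second network reconstructs the image from that code, trained to minimize reconstruction error on the training set.

# Hypothetical sketch of a convolutional encoder–decoder for lossy image
# compression; layer shapes and hyperparameters are illustrative, not the paper's.
import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Downsample a 3-channel image into a smaller latent code.
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1),   # H/2 x W/2
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),  # H/4 x W/4
            nn.ReLU(),
            nn.Conv2d(64, 16, kernel_size=3, stride=1, padding=1),  # compressed code
        )

    def forward(self, x):
        return self.net(x)

class ConvDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Mirror the encoder with transposed convolutions to reconstruct the image.
        self.net = nn.Sequential(
            nn.ConvTranspose2d(16, 64, kernel_size=4, stride=2, padding=1),  # H/2 x W/2
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),  # H x W
            nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, stride=1, padding=1),
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, z):
        return self.net(z)

# Training sketch: minimize reconstruction error on the training images, which is
# where deliberate over-fitting to that set would improve reconstruction accuracy.
encoder, decoder = ConvEncoder(), ConvDecoder()
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)
loss_fn = nn.MSELoss()

batch = torch.rand(8, 3, 64, 64)   # stand-in for a batch of training images
for step in range(10):             # far more iterations in practice
    optimizer.zero_grad()
    reconstruction = decoder(encoder(batch))
    loss = loss_fn(reconstruction, batch)
    loss.backward()
    optimizer.step()

Because encoding and decoding are plain feed-forward convolutions, many images can be batched and pushed through the networks at once, which is what makes the approach a natural fit for highly parallel hardware.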
