Abstract
A novel deep-learning-based residual-error prediction method with application to lossless image compression is introduced. The proposed method uses machine learning tools to minimise the residual error left by the prediction tools employed. Experimental results demonstrate average bitrate savings of 32% over the state of the art in lossless image compression. To the best of the authors' knowledge, this Letter is the first to propose a deep-learning-based method for residual-error prediction.
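As a rough illustration of the idea described above (the Letter does not specify the network architecture, framework, or input features, so everything below is an assumption for illustration only), a learned model can estimate the error that a classical predictor will make from causal context, and the corrected prediction then leaves a smaller residual to be entropy-coded. The following minimal PyTorch sketch shows this structure; the class name, layer sizes, and inputs are hypothetical.

```python
# Minimal sketch, NOT the authors' method: a small CNN that estimates the
# error of a base predictor from causal context, so the corrected
# prediction leaves a smaller residual for lossless coding.
import torch
import torch.nn as nn


class ResidualErrorPredictor(nn.Module):
    """Hypothetical network predicting the error of a base predictor."""

    def __init__(self, context_channels: int = 2, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(context_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 1, kernel_size=3, padding=1),
        )

    def forward(self, context: torch.Tensor) -> torch.Tensor:
        # context: stacked causal features (e.g. base prediction and
        # previously decoded neighbours), shape (N, C, H, W).
        return self.net(context)


if __name__ == "__main__":
    model = ResidualErrorPredictor()
    base_pred = torch.rand(1, 1, 64, 64)   # output of a classical predictor
    neighbours = torch.rand(1, 1, 64, 64)  # causal, already-decoded context
    est_error = model(torch.cat([base_pred, neighbours], dim=1))
    corrected_pred = base_pred + est_error  # residual w.r.t. this is coded
    print(corrected_pred.shape)
```

Because the correction is computed only from data available to both encoder and decoder, the same estimate can be reproduced at the decoder and the scheme remains lossless.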