Abstract

Modern daily life activities generate a huge amount of data, which creates a major challenge for storing and communicating it. As an example, hospitals produce a huge amount of data on a daily basis, which is difficult to store in limited storage or to transmit through the restricted bandwidth of the Internet. Therefore, there is an increasing demand for more research in data compression and communication theory to deal with such challenges. Such research responds to the requirements of data transmission at high speed over networks. In this paper, we focus on a deep analysis of the most common techniques in image compression. We present a detailed analysis of run-length, entropy, and dictionary-based lossless image compression algorithms, with a common numeric example for a clear comparison. Following that, the state-of-the-art techniques are discussed based on some benchmarked images. Finally, we use standard metrics such as average code length (ACL), compression ratio (CR), peak signal-to-noise ratio (PSNR), efficiency, encoding time (ET), and decoding time (DT) in order to measure the performance of the state-of-the-art techniques.

Highlights

  • Modern daily life activities generate a huge amount of data, which creates a major challenge for storing and communicating it

  • Encoding time, decoding time, average code length, compression ratio, peak signal-to-noise ratio (PSNR) and efficiency have been used to analyze the performance of the algorithms

  • The encoding time, decoding time, average code length, and compression ratio are shown in Tables 7–10, whereas Figures 7–11 show the graphical representation of encoding time, decoding time, average code length, compression ratio and efficiency, respectively, based on the twenty-five images
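The performance metrics named in the highlights follow standard definitions. The sketch below (a hypothetical illustration in Python, not code from the paper) computes ACL, CR, and PSNR using those standard formulas, assuming 8-bit grayscale pixel values:

```python
import math

def average_code_length(code_lengths, frequencies):
    # ACL = sum over symbols of p_i * l_i, in bits per symbol,
    # where p_i is the symbol probability and l_i its code length.
    total = sum(frequencies)
    return sum(f / total * l for f, l in zip(frequencies, code_lengths))

def compression_ratio(original_bits, compressed_bits):
    # CR = original size / compressed size (larger is better).
    return original_bits / compressed_bits

def psnr(original, reconstructed, max_val=255):
    # PSNR = 10 * log10(MAX^2 / MSE); infinite for a lossless
    # reconstruction, since the mean squared error is zero.
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(max_val ** 2 / mse)

# Example: two symbols coded with 1 and 2 bits, equal frequency.
print(average_code_length([1, 2], [1, 1]))   # → 1.5
print(compression_ratio(800, 400))           # → 2.0
print(psnr([10, 20, 30], [10, 20, 30]))      # → inf (lossless)
```

Efficiency, the remaining metric, is conventionally the source entropy divided by the ACL, so a code whose ACL equals the entropy is 100% efficient.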



Introduction

An image is partitioned into non-overlapping blocks, and every block is encoded individually using arithmetic coding. This algorithm provides 9.7% better results than JPEG-LS, as reported in [32,33]. A code table with 4096 entries is used, and the fixed codes 0–255 are assigned first as the initial entries, because an image can have at most 256 different pixel values, from 0 to 255. This approach works better for text compression, as reported in [42]. We use a common numeric data set and show the step-by-step implementation procedures of the state-of-the-art data compression techniques mentioned. This demonstrates the comparisons among the methods and exposes their shortcomings based on the results of some benchmarked images.
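The code-table behavior described above can be sketched as a minimal LZW encoder (an illustrative Python sketch, not the paper's implementation): the table is initialized with the 256 single-pixel entries 0–255 and grows with newly seen sequences until the 4096-entry limit is reached.

```python
def lzw_encode(pixels):
    # Initialize the table with the fixed codes 0-255, one per
    # possible pixel value, as the 256 initial entries.
    table = {(i,): i for i in range(256)}
    next_code = 256
    out = []
    seq = ()
    for p in pixels:
        candidate = seq + (p,)
        if candidate in table:
            seq = candidate          # extend the longest known sequence
        else:
            out.append(table[seq])   # emit the code for the known prefix
            if next_code < 4096:     # grow the table until it is full
                table[candidate] = next_code
                next_code += 1
            seq = (p,)
    if seq:
        out.append(table[seq])       # flush the final sequence
    return out

# A run of identical pixels compresses as the table learns the run:
print(lzw_encode([10, 10, 10, 10]))  # → [10, 256, 10]
```

Because repeated pixel runs keep matching ever-longer table entries, the output code count grows sub-linearly on repetitive image data, which is the source of LZW's compression.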

Run-Length Encoding Procedure
Run-Length Decoding Procedure
Analysis of Run-Length Coding Procedure
Shannon–Fano Coding
Shannon–Fano Decoding Style
Analysis of Shannon–Fano Coding
Huffman Coding
Huffman Encoding Style
Huffman Decoding Style
Analysis of Huffman Coding
LZW Encoding Procedure
Arithmetic Coding
Arithmetic Encoding Procedure
Arithmetic Decoding Procedure
Analysis of Arithmetic Coding Procedure
Experimental Results and Analysis
Conclusions