Abstract

The discrete cosine transform (DCT) emerged as a popular mathematical tool in the past decade and is widely used in image compression algorithms due to its high energy compaction capacity. However, DCT-based compression suffers from blocking artifacts, a limitation overcome by another mathematical tool, the wavelet transform, which analyses frequency and spatial components at the same time. In this paper we compare DCT-based image compression with wavelet-based image compression using the CDF 9/7 wavelet, as used in the JPEG 2000 standard, with a common encoding scheme, Huffman encoding, applied to both. We also analyse the trend in image size and image quality, measured by mean square error, across different levels of the wavelet transform.

Keywords: Wavelet transform, Image compression, Discrete cosine transform, Fast wavelet algorithm, Huffman encoding.

I. Introduction

Image compression involves a number of steps, including the conversion of analog signals into digital form. The first step in this process is sampling. The points obtained after digitization of a continuous image function are called sampling points, and they are arranged in a plane in a grid pattern. The digital image can therefore be regarded as a geometrical structure, commonly a matrix. Even after sampling, the pixel values are still real-valued; the transition from these real values to a discrete set of digital values is called quantization. The number of quantization levels should be high enough that boundaries in the image remain easily distinguishable and the digital image closely approximates the original continuous image function. The third step is the transform step, using any popular transform technique. The basic steps in image compression are therefore sampling, quantization, and the transform step (1)(2).
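The following Python sketch illustrates this sampling, quantization, and transform pipeline and the DCT-versus-wavelet comparison in a minimal form. It is only an illustration of the ideas above, not the paper's implementation: the function names, the synthetic 64x64 image, the coefficient-thresholding stand-in for the coding stage, and the use of PyWavelets' 'bior4.4' filters as a proxy for the CDF 9/7 wavelet are all assumptions introduced here.

```python
import numpy as np
import pywt                          # PyWavelets, for the wavelet branch
from scipy.fft import dctn, idctn    # 2-D type-II DCT and its inverse

def quantize(image, levels):
    """Uniformly quantize a real-valued image in [0, 1] to `levels` discrete values."""
    step = 1.0 / levels
    return np.round(image / step) * step

def mse(original, approx):
    """Mean square error between the original and the reconstructed image."""
    return float(np.mean((original - approx) ** 2))

def dct_approx(image, keep_fraction=0.1):
    """Keep only the largest 2-D DCT coefficients and invert (a crude stand-in for coding)."""
    coeffs = dctn(image, norm='ortho')
    cutoff = np.quantile(np.abs(coeffs), 1.0 - keep_fraction)
    coeffs[np.abs(coeffs) < cutoff] = 0.0   # discard low-energy coefficients
    return idctn(coeffs, norm='ortho')

def wavelet_approx(image, keep_fraction=0.1, level=3, wavelet='bior4.4'):
    """Same idea with a multi-level 2-D wavelet transform ('bior4.4' ~ CDF 9/7 filters)."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    flat = np.concatenate([np.abs(coeffs[0]).ravel()] +
                          [np.abs(d).ravel() for band in coeffs[1:] for d in band])
    cutoff = np.quantile(flat, 1.0 - keep_fraction)
    thresholded = [pywt.threshold(coeffs[0], cutoff, mode='hard')]
    for band in coeffs[1:]:
        thresholded.append(tuple(pywt.threshold(d, cutoff, mode='hard') for d in band))
    return pywt.waverec2(thresholded, wavelet)

# Example: sample a synthetic 64x64 image, quantize it, then compare the two transforms by MSE
rng = np.random.default_rng(0)
image = rng.random((64, 64))
digital = quantize(image, levels=256)                  # quantization step
print("DCT     MSE:", mse(image, dct_approx(digital)))
print("Wavelet MSE:", mse(image, wavelet_approx(digital)))
```

Under this sketch, the compression comparison reduces to keeping the same fraction of transform coefficients in each domain and measuring the resulting mean square error, mirroring the image-quality criterion used in the paper.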
