Abstract

In the present era of the internet and multimedia, image compression techniques are essential to improve image and video performance in terms of storage space, network bandwidth usage, and secure transmission. A number of image compression methods are available, with largely differing compression ratios and coding complexity. In this paper, we propose a new method for compressing high-resolution images based on the Discrete Fourier Transform (DFT) and the Matrix Minimization (MM) algorithm. The method consists of transforming an image by DFT, yielding the real and imaginary components. A quantization process is applied to both components independently, aiming at increasing the number of zero-valued high frequency coefficients. The real component matrix is separated into Low Frequency Coefficients (LFC) and High Frequency Coefficients (HFC). Finally, the MM algorithm followed by arithmetic coding is applied to the LFC and HFC matrices. The decompression algorithm decodes the data in reverse order. A sequential search algorithm is used to decode the data from the MM matrix. Thereafter, all decoded LFC and HFC values are combined into one matrix, followed by the inverse DFT. Results demonstrate that the proposed method yields high compression ratios of over 98% for structured light images with good image reconstruction. Moreover, it is shown that the proposed method compares favorably with the JPEG technique in terms of compression ratio and image quality.
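To make the front end of the pipeline concrete, the following is a minimal sketch of the DFT, quantization, and LFC/HFC separation steps described above. It is not the authors' implementation: the quantization factors `q_low` and `q_high`, the block size `low_size`, and the function name are illustrative assumptions.

```python
import numpy as np

def dft_split_and_quantize(image, q_low=1.0, q_high=20.0, low_size=8):
    """Hypothetical sketch of the encoding front end: 2-D DFT, independent
    quantization of the real and imaginary components, and separation of the
    quantized real matrix into low- and high-frequency parts."""
    F = np.fft.fft2(image.astype(np.float64))   # forward 2-D DFT
    real_q = np.round(F.real / q_low)           # quantize real component
    imag_q = np.round(F.imag / q_high)          # quantize imaginary component

    # Split the quantized real matrix: with fft2, the DC term and the lowest
    # frequencies sit in the top-left corner (LFC); the remainder is treated
    # here as high-frequency content (HFC).
    lfc = real_q[:low_size, :low_size].copy()
    hfc = real_q.copy()
    hfc[:low_size, :low_size] = 0
    return lfc, hfc, imag_q
```

In the paper, the LFC and HFC matrices would then be passed to the Matrix Minimization algorithm and arithmetic coding; those stages are omitted from this sketch.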

Highlights

  • The exchange of uncompressed digital images requires considerable amounts of storage space and network bandwidth

  • It is clear that multimedia sharing platforms such as Facebook and Instagram lead to the widespread exchange of digital images over the Internet [1]

  • We propose a new algorithm to compress digital images based on the Discrete Fourier Transform (DFT) in conjunction with the Matrix Minimization method as proposed in



Introduction

The exchange of uncompressed digital images requires considerable amounts of storage space and network bandwidth. It is clear that multimedia sharing platforms such as Facebook and Instagram lead to the widespread exchange of digital images over the Internet [1]. This has led to efforts to improve and fine-tune existing compression algorithms, along with new algorithms proposed by the research community, to reduce image size whilst maintaining the best possible quality. Redundancy in digital images arises from the fact that pixels are highly correlated, to the extent that reducing this correlation cannot be noticed by the human eye (Human Visual System) [4,5]. The intention is to preserve the low frequency values and reduce the high frequency values by a certain amount, in order to maintain the best quality at the lowest possible size [6,7]. The sketch below illustrates this principle.
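The following is a small, self-contained demonstration of the idea that preserving low frequencies and discarding (or heavily reducing) high frequencies retains most of the perceptual content of an image. It is an illustrative low-pass experiment, not the paper's algorithm; the `keep_fraction` parameter and function name are assumptions made for the example.

```python
import numpy as np

def lowpass_dft_demo(image, keep_fraction=0.1):
    """Illustrative only: zero out high-frequency DFT coefficients and
    reconstruct, showing that most perceptual content survives when only
    the low frequencies are preserved."""
    F = np.fft.fft2(image.astype(np.float64))
    F_shifted = np.fft.fftshift(F)              # move DC to the centre
    rows, cols = F_shifted.shape

    # Keep only a central window of low-frequency coefficients.
    mask = np.zeros_like(F_shifted)
    r = int(rows * keep_fraction / 2)
    c = int(cols * keep_fraction / 2)
    mask[rows // 2 - r: rows // 2 + r, cols // 2 - c: cols // 2 + c] = 1

    # Inverse transform of the masked spectrum gives the reconstruction.
    reconstructed = np.fft.ifft2(np.fft.ifftshift(F_shifted * mask)).real
    return reconstructed
```

Even with only a small fraction of the coefficients retained, the reconstruction remains visually close to the original, which is the property that transform-based compression methods such as the one proposed here exploit.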

