Abstract

This paper proposes a high-throughput lossless image-compression algorithm based on Golomb–Rice coding, together with its hardware architecture. The proposed solution increases compression ratios (CRs) while preserving throughput by exploiting a novel parallel variable-length sign coding (PVSC) algorithm, which reduces the number of sign bits to achieve a higher CR. In addition, the proposed solution adopts and modifies two existing compression algorithms to improve the overall compression performance. The experimental results show that the proposed solution yields an average CR of 3.12, which is higher than those achieved with the previous algorithms. The hardware implementation of the proposed solution for an $8 \times 8$ block unit achieves throughputs of 18 GBps and 24 GBps when encoding and decoding, respectively. This hardware performance is sufficient to handle $7680\times 4320$ @240-Hz image processing.
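As a rough sanity check on the throughput claim, the raw pixel rate of 7680 × 4320 @ 240 Hz can be computed directly. The bytes-per-pixel figure below is an assumption for illustration (the abstract does not state the sample bit depth), not a value from the paper:

```python
# Back-of-the-envelope check: does an 18 GB/s encoder cover 8K @ 240 Hz?
# bytes_per_pixel is an assumed value (e.g. 10-16-bit samples stored in
# 2 bytes); the paper does not specify it here.
width, height, fps = 7680, 4320, 240
pixels_per_second = width * height * fps        # ~7.96 Gpixel/s
bytes_per_pixel = 2                             # assumption for illustration
raw_rate_gbps = pixels_per_second * bytes_per_pixel / 1e9
print(round(raw_rate_gbps, 1))
```

Under this assumption the raw input rate is about 15.9 GB/s, which sits below the reported 18 GBps encoding throughput, consistent with the claim.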

Highlights

  • In recent years, high-definition (HD) images, such as full HD (1920 × 1080), quad HD (QHD, 2560 × 1440), and ultra HD (UHD, 3840 × 2160 or 7680 × 4320) have been used in mobile devices, PCs, and TVs

  • The effects of the parallel variable-length sign coding (PVSC) algorithm on data compression are analyzed, and the proposed algorithms are compared with others [8], [18], [37] in terms of compression ratios (CRs)

  • The hardware implementation is designed with Verilog HDL and its evaluation is expressed in terms of clock frequency, throughput and unit/total area in a 55-nm cell library


Summary

INTRODUCTION

High-definition (HD) images, such as full HD (1920 × 1080), quad HD (QHD, 2560 × 1440), and ultra HD (UHD, 3840 × 2160 or 7680 × 4320), have been used in mobile devices, PCs, and TVs. The studies in [8], [18], [36]–[38], [42], [43] increase CRs by proposing block-based prediction algorithms or by enhancing entropy coding algorithms, and they implement hardware for high-resolution image processing. The hardware architecture in [8] performs massively parallel processing in both the variable-length coding stage and the prediction stage. It achieves lossless pixel throughput by compressing and decompressing blocks every cycle, with a 6–12 times performance improvement over the comparative models [13], [26]–[29]. We propose a lossless compression solution (algorithms and architecture) that increases CRs while retaining the massively parallel pixel-processing architecture suggested in [8]. To this end, the following three techniques are utilized.
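Since the solution is built on Golomb–Rice coding, a minimal sketch of the underlying code may help; this illustrates standard Rice coding only (the paper's PVSC sign handling and block-parallel hardware are not modeled here, and the function names are ours):

```python
def rice_encode(value, k):
    """Rice-encode a non-negative integer with parameter k.

    The quotient value >> k is written in unary (q ones then a zero),
    followed by the k low-order remainder bits.
    """
    q, r = value >> k, value & ((1 << k) - 1)
    unary = "1" * q + "0"
    return unary + format(r, f"0{k}b") if k else unary

def rice_decode(bits, k):
    """Decode one Rice-coded value; return (value, bits consumed)."""
    q = bits.index("0")                         # unary quotient
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r, q + 1 + k

# Example: encode 9 with k = 2 -> quotient 2, remainder 1 -> "110" + "01"
code = rice_encode(9, 2)
print(code, rice_decode(code, 2))
```

In practice, signed prediction residuals are first mapped to non-negative integers before Rice coding; reducing the cost of those sign bits is precisely what the proposed PVSC algorithm targets.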

RELATED WORK
ZeroDT
KSplitter
EXPERIMENTAL RESULTS
CONCLUSION

