Abstract
LZW compression is a well-known, patented lossless compression method used in the Unix file compression utility and in the GIF and TIFF image formats. It converts an input string of characters (or 8-bit unsigned integers) into a string of codes using a code table (or dictionary) that maps strings to codes. Since the code table is built by repeatedly adding newly appearing substrings during the conversion, LZW compression is very hard to parallelize. The main purpose of this paper is to accelerate LZW compression for TIFF images using a CUDA-enabled GPU. Our goal is to implement the LZW compression algorithm in CUDA with several acceleration techniques, although this is a challenging task. Suppose that a GPU generates an image via a computer graphics or image processing CUDA program, and we want to archive it as an LZW-compressed TIFF image on an SSD connected to the host PC. We focus on the following two scenarios. Scenario 1: the image is compressed on the GPU and written to the SSD through the host PC. Scenario 2: the image is transferred to the host PC, then compressed and written to the SSD by the CPU. Experimental results using an NVIDIA GeForce GTX 980 and an Intel Core i7-4790 show that Scenario 1, using our GPU implementation of LZW compression, is about three times faster than Scenario 2. From this, we conclude that it makes sense to compress images on the GPU before archiving them to the SSD.
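To illustrate the serial dependency the abstract describes, the following is a minimal sequential sketch of LZW compression in C++. It is an illustration under our own assumptions, not the authors' GPU implementation, and it omits TIFF-specific details such as clear codes and variable-width code packing; the function name lzw_compress and the sample input are ours.

#include <cstdint>
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Sequential LZW compression over 8-bit symbols.
// The dictionary starts with all 256 single-byte strings; each step emits
// the code of the longest known prefix and then inserts that prefix
// extended by one symbol. Because every insertion depends on the codes
// emitted so far, the loop carries a serial dependency from start to end.
std::vector<uint32_t> lzw_compress(const std::vector<uint8_t>& input) {
    std::map<std::string, uint32_t> table;
    for (uint32_t c = 0; c < 256; ++c)
        table[std::string(1, static_cast<char>(c))] = c;

    std::vector<uint32_t> codes;
    uint32_t next_code = 256;
    std::string current;
    for (uint8_t byte : input) {
        std::string extended = current + static_cast<char>(byte);
        if (table.count(extended)) {
            current = extended;              // longest known match keeps growing
        } else {
            codes.push_back(table[current]); // emit code of the longest match
            table[extended] = next_code++;   // register the new substring
            current = std::string(1, static_cast<char>(byte));
        }
    }
    if (!current.empty())
        codes.push_back(table[current]);     // flush the final match
    return codes;
}

int main() {
    std::vector<uint8_t> data = {'T', 'O', 'B', 'E', 'O', 'R', 'T', 'O', 'B', 'E'};
    for (uint32_t code : lzw_compress(data))
        std::cout << code << ' ';            // prints: 84 79 66 69 79 82 256 258
    std::cout << '\n';
}

For the sample input TOBEORTOBE, the repeated substring TO is emitted as the single code 256 that was added earlier in the same pass; this dependence of later output on earlier dictionary insertions is exactly what a parallel GPU implementation must work around.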