Abstract

With advances in technology and the data boom driven by the Internet of Things (IoT), the world now generates enormous amounts of data. Take self-driving cars as an example: a single car generates about one terabyte of data per day. Data of this size is difficult to move around, so we require compression algorithms that are efficient, fast, and simple. This is where methods like Huffman coding come into play. The objective of this paper is to compare the efficiency of Huffman encoding with algorithms of similar or lesser complexity. It performs a comparative study of Huffman coding against Shannon-Fano coding, run-length encoding, ROT13, and Lempel-Ziv-Welch (LZW), using compression rate, i.e. message size versus time, as the benchmark.
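To make the comparison concrete, the classic Huffman construction, repeatedly merging the two least-frequent symbols so that rarer symbols receive longer bit codes, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation; the function names `huffman_codes` and `compress` are chosen here for clarity:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a prefix-free Huffman code table for `text` by greedily
    merging the two least-frequent nodes (the classic algorithm)."""
    freq = Counter(text)
    # Each heap entry: (frequency, unique tiebreaker, {symbol: code-so-far}).
    # The tiebreaker keeps tuple comparison from ever reaching the dict.
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate case: only one distinct symbol
        return {sym: "0" for sym in heap[0][2]}
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        # Prepend a branch bit: 0 for the left subtree, 1 for the right.
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

def compress(text):
    """Return the Huffman-encoded bit string and its code table."""
    table = huffman_codes(text)
    return "".join(table[ch] for ch in text), table
```

For example, `compress("abracadabra")` assigns the frequent symbol `a` a short code and the rare symbols `c` and `d` longer ones, so the encoded bit string is far shorter than the 8 bits per character of the raw text, which is the compression-rate measure the paper benchmarks.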
