Abstract

The windowed Huffman algorithm is introduced. In this algorithm, the Huffman code tree is constructed from the probabilities of symbol occurrences within a finite history, and a window buffer stores the most recently processed symbols. Experimental results show that, with a suitable window size, the code length produced by the windowed Huffman algorithm is shorter than that produced by the static Huffman algorithm, dynamic Huffman algorithms, and the residual Huffman algorithm, and can even be smaller than the first-order entropy. Furthermore, three policies for adjusting the window size dynamically are discussed. The windowed Huffman algorithm with an adaptive-size window performs as well as, or better than, the algorithm with the optimal fixed-size window. The new algorithm is well suited to online encoding and decoding of data whose probability distribution varies.
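To make the windowed idea concrete, the following is a minimal sketch in Python, assuming the code tree is rebuilt from scratch from the current window's symbol counts and that unseen symbols are escaped as literals; the paper itself may maintain the tree incrementally and handle new symbols differently. The names window_size, build_code, and encode are illustrative, not taken from the paper.

# Minimal sketch of windowed Huffman coding: the code for each symbol is
# derived from the frequencies of the `window_size` most recent symbols.
import heapq
from collections import Counter, deque
from itertools import count

def build_code(freqs):
    """Build a Huffman code table {symbol: bitstring} from frequency counts."""
    if len(freqs) == 1:                       # degenerate case: one distinct symbol
        return {next(iter(freqs)): "0"}
    tiebreak = count()                        # unique counter keeps heap comparisons well-defined
    heap = [(f, next(tiebreak), {s: ""}) for s, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)       # merge the two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
    return heap[0][2]

def encode(symbols, window_size=256):
    """Encode a stream, coding each symbol with a tree built from the finite
    history of the `window_size` most recently processed symbols."""
    window = deque(maxlen=window_size)        # window buffer of recent symbols
    out = []
    for s in symbols:
        freqs = Counter(window)
        if s in freqs:                        # symbol occurs within the window
            out.append(build_code(freqs)[s])
        else:                                 # unseen in the window: emit an escape/literal
            out.append(f"<literal:{s}>")
        window.append(s)                      # slide the window forward
    return out

print(encode("abracadabra", window_size=8))

Because the decoder sees the same decoded symbols in the same order, it can maintain an identical window and rebuild the same code table at each step, so no side information about the statistics needs to be transmitted.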
