Abstract

In this paper, we present a very large scale integration (VLSI) design of adaptive binary arithmetic coding for lossless data compression and decompression. Its main modules are an adaptive probability estimation modeler (APEM), an arithmetic operation unit (AOU), and a normalization unit (NU). A new bit-stuffing technique, which simultaneously solves both the carry-over and source-termination problems efficiently, is proposed and implemented in the NU. The APEM estimates the conditional probabilities of input symbols efficiently using a table-lookup approach with 1.28 kbytes of memory. A new formula that efficiently reflects changes in the symbols' occurrence probabilities is proposed, and a complete binary tree is used to set up the values in the APEM's probability table. In the AOU, a simplified parallel multiplier is proposed that requires approximately half the area of a standard parallel multiplier while maintaining a good compression ratio. Owing to these novel designs, the chip can compress any type of data with a good compression ratio. An asynchronous interface circuit with an 8-b first-in first-out (FIFO) buffer for input/output (I/O) communication of the chip is also designed, so I/O and compression operations in the chip can proceed simultaneously. Moreover, the concept of design for testability is applied and a scan path is implemented in the chip. A prototype 0.8-µm chip has been designed and fabricated in a reasonable die size. The chip yields a processing rate of 3 Mb/s at a clock rate of 25 MHz.
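To make the coding pipeline concrete, below is a minimal software sketch of an adaptive binary arithmetic encoder in C. It is not the paper's hardware design: the adaptive model is a simple count-based estimator rather than the APEM's table-lookup scheme, the interval split uses a full multiply rather than the proposed simplified parallel multiplier, and carry-over is handled with the standard pending-bit technique instead of the proposed bit-stuffing. All names and constants here are illustrative assumptions.

```c
#include <stdint.h>
#include <stdio.h>

#define CODE_BITS 16u
#define TOP       ((1u << CODE_BITS) - 1)   /* 0xFFFF */
#define HALF      (1u << (CODE_BITS - 1))   /* 0x8000 */
#define QUARTER   (1u << (CODE_BITS - 2))   /* 0x4000 */

typedef struct {
    uint32_t low, high;   /* current coding interval [low, high] */
    uint32_t pending;     /* opposite bits owed after underflow renorms */
    uint32_t c0, c1;      /* adaptive bit counts: P(0) ~ c0/(c0+c1) */
} Encoder;

static void enc_init(Encoder *e) {
    e->low = 0; e->high = TOP; e->pending = 0;
    e->c0 = 1;  e->c1 = 1;     /* uniform prior */
}

/* Emit one settled bit plus any pending opposite bits (as ASCII for demo). */
static void emit(Encoder *e, int bit) {
    putchar('0' + bit);
    while (e->pending) { putchar('0' + !bit); e->pending--; }
}

static void encode_bit(Encoder *e, int bit) {
    /* Split the interval in proportion to the estimated P(0). */
    uint32_t range = e->high - e->low + 1;
    uint32_t split = e->low +
        (uint32_t)((uint64_t)range * e->c0 / (e->c0 + e->c1)) - 1;

    if (bit == 0) e->high = split;
    else          e->low  = split + 1;

    /* Adapt the model; halve counts so old statistics decay. */
    if (bit == 0) e->c0++; else e->c1++;
    if (e->c0 + e->c1 >= (1u << 12)) {
        e->c0 = (e->c0 + 1) >> 1;
        e->c1 = (e->c1 + 1) >> 1;
    }

    /* Renormalize: emit bits that are settled, rescale the interval. */
    for (;;) {
        if (e->high < HALF) {              /* top bit settled to 0 */
            emit(e, 0);
        } else if (e->low >= HALF) {       /* top bit settled to 1 */
            emit(e, 1);
            e->low -= HALF; e->high -= HALF;
        } else if (e->low >= QUARTER && e->high < 3 * QUARTER) {
            e->pending++;                  /* underflow: defer the bit */
            e->low -= QUARTER; e->high -= QUARTER;
        } else {
            break;
        }
        e->low  = e->low << 1;
        e->high = (e->high << 1) | 1;
    }
}

/* Flush enough bits to disambiguate the final interval. */
static void enc_flush(Encoder *e) {
    e->pending++;
    emit(e, e->low >= QUARTER ? 1 : 0);
}
```

A decoder mirrors the same loop with the same model so the two stay synchronized. The renormalization loop plays the role the NU plays in the chip: once the top bits of low and high agree, they are emitted and the interval is rescaled, which keeps the finite-precision registers from underflowing; in the proposed hardware, this is also where the bit-stuffing technique resolves carry propagation.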
