Abstract
This paper proposes a high-throughput, memory-efficient arithmetic coder architecture for set partitioning in hierarchical trees (SPIHT) image compression based on a simple context model. The architecture benefits from optimizations at several levels, from the algorithm abstraction down to the circuit implementation. First, the complex context model used in software is replaced by a simple context model that uses only the states of the brother nodes in the SPIHT coding zerotree to form context symbols for arithmetic coding. This simple context model yields a regular access pattern when reading the wavelet transform coefficients, which is convenient for hardware implementation at the cost of a slight loss in coding performance. Second, to avoid rescanning the wavelet transform coefficients, a breadth-first-search SPIHT algorithm without lists is used instead of the list-based SPIHT algorithm; in particular, the coding bit-planes of each zerotree are processed in parallel. Third, an out-of-order execution mechanism for the different context types is proposed, which allocates each context symbol to an idle arithmetic coding core in an order that may differ from the input order. To balance the input rate of the wavelet coefficients, eight arithmetic coders are replicated in the compression system, and each arithmetic coder contains four cores that process different contexts. Fourth, several dedicated circuits are designed to further improve the throughput of the architecture: a common bit detection (CBD) circuit unrolls the renormalization stage of the arithmetic coding, and a carry look-ahead adder (CLA) and a fast multiplier-divider shorten the critical path. Moreover, an adaptive clock switching mechanism can stop the clock of invalid bit-planes, depending on the input image, to save power. Experimental results demonstrate that the proposed architecture attains a maximum throughput of 902.464 Mb/s and achieves a 20.08% saving in power consumption over a full bit-plane coding scheme on field-programmable gate arrays (FPGAs).
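To illustrate the idea of the simple context model, the following minimal C sketch forms a context index purely from the significance states of a coefficient's brother nodes in the zerotree. The three-brother grouping, the function name, and the 3-bit packing are assumptions made for illustration; they are not taken from the paper's implementation.

```c
/* Hypothetical sketch of the "simple context model" idea: the context
 * symbol for a coefficient is derived only from the significance states
 * of its brother (sibling) nodes in the SPIHT zerotree.
 * The grouping into three brothers and the bit packing are assumptions. */
#include <stdint.h>

/* Significance state of one node at the current bit-plane: 0 or 1. */
typedef uint8_t sig_state_t;

/* Pack the brothers' states into a 3-bit context index (0..7).
 * A real design would use this index to select the probability model
 * of the arithmetic coder; only the context formation is shown here. */
static inline uint8_t brother_context(sig_state_t b0, sig_state_t b1,
                                      sig_state_t b2)
{
    return (uint8_t)((b0 << 2) | (b1 << 1) | b2);
}
```

Because the context depends only on siblings that are stored together in the zerotree, the coefficient reads follow a regular pattern, which is what makes the model convenient for a hardware datapath.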
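The out-of-order execution mechanism can likewise be sketched in software terms: an incoming context symbol is routed to whichever of the four per-context cores is idle, instead of being processed in strict input order. The core count of four follows the abstract; the data structures and function names below are hypothetical.

```c
/* Hypothetical sketch of out-of-order dispatch of context symbols to
 * idle arithmetic coding cores.  Struct fields and names are assumed. */
#include <stdbool.h>
#include <stdint.h>

#define NUM_CORES 4   /* four cores per arithmetic coder, per the abstract */

typedef struct {
    bool    busy;     /* core is currently encoding a symbol */
    uint8_t context;  /* context class the core is handling  */
} ac_core_t;

/* Assign a context symbol to the first idle core; return its index,
 * or -1 if all cores are busy and the symbol must be stalled. */
static int dispatch_symbol(ac_core_t cores[NUM_CORES], uint8_t context)
{
    for (int i = 0; i < NUM_CORES; i++) {
        if (!cores[i].busy) {
            cores[i].busy    = true;
            cores[i].context = context;
            return i;
        }
    }
    return -1; /* all cores busy this cycle */
}
```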